A simple and efficient fusion framework for surveillance images
- Category: Information technologies, systems analysis and administration
- Published on 19 January 2017
Authors:
Lili Chen, Laboratory of Intelligent Information Processing, Suzhou University, Suzhou, China
Hongjun Guo, The Key Laboratory of Intelligent Computing & Signal Processing of MOE, Anhui University, Hefei, China
Abstract:
Purpose. To address the fusion of surveillance images, this paper proposes a simple and efficient fusion framework based on block compressed sensing sampling (BCSS), which consists of two fusion methods using basic-BCSS and sliding-BCSS, respectively.
Methodology. Owing to its low sampling ratio and low computational complexity, compressed sensing (CS) is widely used in signal processing. Basic-BCSS is the standard block-based CS scheme, in which the source image is partitioned into distinct blocks; sliding-BCSS, proposed here for the first time, is a modified version in which a small sliding block is formed around each pixel, with appropriate padding at the image borders. The basic idea of the framework is to select, in the spatial domain, the blocks or pixels whose BCSS measurement outputs have the greater L2-norm.
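To make the basic-BCSS selection rule concrete, the following is a minimal sketch in Python. The function name, block size, sampling ratio, and Gaussian measurement matrix are illustrative assumptions, not the authors' exact settings; only the block partitioning and the larger-L2-norm selection rule come from the abstract.

```python
import numpy as np

def basic_bcss_fuse(img_a, img_b, block=8, ratio=0.25, seed=0):
    """Sketch of basic-BCSS fusion: partition both source images into
    distinct blocks, take random CS measurements of each block, and copy
    into the fused image the block whose measurement vector has the
    larger L2-norm. Block size, ratio, and matrix are assumptions."""
    h, w = img_a.shape
    rng = np.random.default_rng(seed)
    n = block * block
    m = max(1, int(ratio * n))           # measurements per block (sampling ratio)
    phi = rng.standard_normal((m, n))    # shared Gaussian measurement matrix
    fused = np.empty_like(img_a)
    for i in range(0, h, block):
        for j in range(0, w, block):
            ba = img_a[i:i+block, j:j+block]
            bb = img_b[i:i+block, j:j+block]
            # truncate phi for partial blocks at the image border
            ya = phi[:, :ba.size] @ ba.ravel().astype(float)
            yb = phi[:, :bb.size] @ bb.ravel().astype(float)
            # keep the block with the greater L2-norm of its measurements
            if np.linalg.norm(ya) >= np.linalg.norm(yb):
                fused[i:i+block, j:j+block] = ba
            else:
                fused[i:i+block, j:j+block] = bb
    return fused
```

Under this reading, sliding-BCSS would apply the same norm comparison per pixel over a small padded neighborhood rather than over distinct blocks.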
Findings. The fusion framework is tested on three pairs of grayscale surveillance images, comprising infrared/visible and millimeter-wave/visible pairs, and compared with several traditional fusion methods. Experimental results demonstrate that the proposed framework significantly improves both fusion quality and speed.
Originality. A simple and efficient fusion framework using BCSS in the spatial domain is proposed for the first time.
Practical value. The framework is of practical value for real-time surveillance applications.