Unifying Dual-Pyramid Structure and Y–G Channel Synergy for Full-Reference Image Quality Assessment

Authors

  • Ali Abdulazeez Mohammed Baqer Qazzaz, University of Kufa, Iraq
  • Yousif Samer Mudhafar, The Islamic University, Iraq
  • Siraj Muneer Mahboba, University of Kufa, Iraq

DOI:

https://doi.org/10.26877/asset.v8i1.2783

Keywords:

Dual-Pyramid Architecture, Full-Reference Image Quality Assessment, Human Visual System, Luminance–Green Synergy, Synergistic Structural Similarity Index, Y–G fusion

Abstract

Standard metrics such as SSIM often overlook complex chromatic distortions, creating a gap between objective scores and human judgment. To address this, we present the Synergistic Structural Similarity Index (SSSI), a metric grounded in a novel dual-pyramid strategy that combines the stability of Gaussian-blurred downsampling with the sharpness of direct subsampling. Our method departs from luminance-only analysis by treating the luminance (Y) and green (G) channels as equal, synergistic partners, mirroring the human eye's spectral sensitivity. On the KADID-10k dataset, SSSI achieves an SROCC of 0.7793, a 4% gain over the standard SSIM baseline, demonstrating that integrating chromatic data with dual-scale structural analysis yields a more accurate proxy for human visual perception.
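The abstract's two ideas — a dual pyramid (Gaussian-blurred vs. directly subsampled scales) and an equal-weight Y–G channel partnership — can be sketched as follows. This is a minimal illustration assuming a simplified global (non-windowed) SSIM, a 1-2-1 separable blur kernel, BT.601 luma weights, and uniform averaging of the branch scores; the paper's actual kernel sizes, window scheme, and weighting are not specified here, so all of those choices are hypothetical.

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    """Simplified whole-image SSIM (single mean/variance, no sliding window)."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def gaussian_half(img):
    """Stability branch: 1-2-1 separable blur, then factor-2 subsampling."""
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return img[::2, ::2]

def subsample_half(img):
    """Sharpness branch: direct factor-2 subsampling, no pre-filter."""
    return img[::2, ::2]

def luminance(rgb):
    """BT.601 luma from an (H, W, 3) RGB array."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def sssi_sketch(ref_rgb, dist_rgb):
    """Hypothetical SSSI sketch: equal Y-G fusion over both pyramid branches."""
    pairs = [(luminance(ref_rgb), luminance(dist_rgb)),   # Y channel
             (ref_rgb[..., 1], dist_rgb[..., 1])]         # G channel
    scores = []
    for r, d in pairs:
        s_blur = ssim_global(gaussian_half(r), gaussian_half(d))
        s_sub = ssim_global(subsample_half(r), subsample_half(d))
        scores.append(0.5 * (s_blur + s_sub))   # average the two scales
    return 0.5 * (scores[0] + scores[1])        # equal Y-G partnership
```

To reproduce an SROCC-style evaluation, one would compute `sssi_sketch` over all reference/distorted pairs in KADID-10k and correlate the scores against the dataset's mean opinion scores with Spearman's rank correlation (e.g. `scipy.stats.spearmanr`).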

Author Biographies

  • Ali Abdulazeez Mohammed Baqer Qazzaz, University of Kufa

    Department of Computer Science, Faculty of Education, University of Kufa, Najaf, Iraq

  • Yousif Samer Mudhafar, The Islamic University

    Department of Computer Science, Faculty of Education, University of Kufa, Najaf, Iraq

    Department of Computer Techniques Engineering, Faculty of Technical Engineering, The Islamic University, Najaf, Iraq

  • Siraj Muneer Mahboba, University of Kufa

    Department of Computer Science, Faculty of Education, University of Kufa, Najaf, Iraq

    Faculty of Computing, Universiti Teknologi Malaysia, Johor Bahru, Skudai, Johor, Malaysia

Published

2026-01-30