Abstract
The main aim of objective image quality assessment (IQA) is to evaluate image quality consistently with human perception. Existing perceptual IQA metrics, however, cannot accurately represent the degradations caused by different types of distortions; e.g., structural similarity metrics perform well on content-dependent distortions, whereas the peak signal-to-noise ratio (PSNR) performs better on content-independent distortions. In this paper, we integrate the merits of the existing IQA metrics under the guidance of the recently revealed internal generative mechanism (IGM). The IGM indicates that the human visual system actively predicts sensory information and tries to avoid residual uncertainty for image perception and understanding. Motivated by the IGM theory, we adopt an autoregressive prediction algorithm to decompose an input scene into two portions: the predicted portion, which carries the predicted visual content, and the disorderly portion, which carries the residual content. Distortions on the predicted portion degrade the primary visual information, and structural similarity procedures are employed to measure this degradation; distortions on the disorderly portion mainly change the uncertain information, and the PSNR is employed for them. According to the noise energy deployment on the two portions, we finally combine the two evaluation results to obtain the overall quality score. Experimental results show that the proposed metric performs comparably with the state-of-the-art quality metrics.
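To make the pipeline concrete, the following Python sketch illustrates the flow described above: decompose the reference and distorted images into predicted and disorderly portions, score the predicted portion with a structural similarity measure, score the disorderly portion with PSNR, and combine the two scores according to the distortion energy on each portion. The local-mean predictor standing in for the autoregressive model, the single-window SSIM, and the energy-based weighting are simplifications assumed for illustration; they are not the exact formulas of the proposed metric.

```python
import numpy as np
from scipy.signal import convolve2d


def decompose(image, patch=3):
    """Split an image into a predicted portion and a disorderly (residual) portion.
    The prediction here is a simple neighbour-mean surrogate for the paper's
    autoregressive predictor."""
    kernel = np.ones((patch, patch))
    kernel[patch // 2, patch // 2] = 0.0      # predict each pixel from its neighbours only
    kernel /= kernel.sum()
    predicted = convolve2d(image, kernel, mode="same", boundary="symm")
    return predicted, image - predicted


def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - dist) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)


def ssim_global(ref, dist, peak=255.0):
    """Single-window SSIM (practical metrics use local windows and pooling)."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mu_x, mu_y = ref.mean(), dist.mean()
    var_x, var_y = ref.var(), dist.var()
    cov = np.mean((ref - mu_x) * (dist - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )


def igm_quality(ref, dist):
    """Illustrative IGM-style score: SSIM on the predicted portion, PSNR on the
    disorderly portion, combined by the share of distortion energy on each portion.
    The weighting and the PSNR rescaling are hypothetical choices for this sketch."""
    ref_p, ref_d = decompose(ref)
    dist_p, dist_d = decompose(dist)
    s = ssim_global(ref_p, dist_p)            # structural degradation of predicted content
    p = psnr(ref_d, dist_d)                   # noise change in the residual content
    e_p = np.mean((ref_p - dist_p) ** 2)      # distortion energy on predicted portion
    e_d = np.mean((ref_d - dist_d) ** 2)      # distortion energy on disorderly portion
    w = e_p / (e_p + e_d + 1e-12)
    return w * s + (1 - w) * min(p / 50.0, 1.0)   # PSNR mapped to a [0, 1] range for mixing


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.uniform(0, 255, size=(64, 64))
    noisy = np.clip(reference + rng.normal(0, 10, size=reference.shape), 0, 255)
    print(f"quality score: {igm_quality(reference, noisy):.3f}")
```

The weighting step reflects the idea stated in the abstract that the final score depends on how the noise energy is deployed over the two portions; a distortion that mostly perturbs the residual content is judged largely by PSNR, while one that alters the predicted content is judged largely by the structural term.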