Just Noticeable Difference Estimation Using Visual Saliency in Images

Article Type:
Research/Original Article (accredited journal ranking)
Abstract:

Due to physiological and physical limitations of the brain and the eye, the human visual system (HVS) is unable to perceive changes in a visual signal whose magnitude falls below a certain threshold, the so-called just-noticeable distortion (JND) threshold. Visual attention (VA) provides a mechanism for selecting particular aspects of a visual scene so as to reduce the computational load on the brain. According to current knowledge, VA is believed to be driven by visual saliency: a region of a scene is said to be visually salient if it possesses characteristics that make it stand out from its surroundings and draw our attention to it. Most existing research on estimating the JND threshold has treated the sensitivity of the HVS as uniform throughout the scene, ignoring the effects of visual attention caused by visual saliency. Several studies have shown that in salient areas, which attract more visual attention, visual sensitivity is higher and JND thresholds are therefore lower, and vice versa; in other words, visual saliency modulates JND thresholds. Considering the effects of visual saliency on the JND threshold is therefore not only logical but necessary. In this paper, we present an improved non-uniform model for estimating the JND threshold of images that accounts for the mechanism of visual attention, exploiting the visual saliency that makes different parts of an image unequally important. The proposed model, which can build on any existing uniform JND model, adjusts the JND threshold of each pixel according to its visual saliency using a non-linear modulation function. The parameters of this nonlinear function are obtained through an optimization procedure, yielding an improved JND model.
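The per-pixel modulation described above can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the exponential-free linear form of the modulation factor and the parameter names `alpha` and `beta` are assumptions standing in for the paper's optimized nonlinear function, and the base JND and saliency maps are placeholders for whatever base JND model and saliency model are actually used.

```python
import numpy as np

def modulate_jnd(jnd_base, saliency, alpha=0.7, beta=0.5):
    """Scale a base JND map by a simple decreasing function of saliency.

    Salient pixels (saliency near 1) receive lower JND thresholds,
    reflecting higher visual sensitivity there; non-salient pixels
    receive higher thresholds. The form of the modulation and the
    values of alpha/beta are illustrative assumptions only.
    """
    rng = saliency.max() - saliency.min()
    if rng > 0:
        saliency = (saliency - saliency.min()) / rng  # normalize to [0, 1]
    # Modulation factor lies in [alpha, alpha + beta] and decreases with saliency.
    factor = alpha + beta * (1.0 - saliency)
    return jnd_base * factor

# Toy example: a uniform base JND map with one salient region in the centre.
jnd_base = np.full((8, 8), 4.0)
saliency = np.zeros((8, 8))
saliency[3:5, 3:5] = 1.0
jnd = modulate_jnd(jnd_base, saliency)
# Salient pixels end up with lower thresholds than non-salient ones.
```

Here the background pixels keep a raised threshold (factor 1.2) while the salient patch drops to factor 0.7; in the paper, the corresponding parameters are instead found by optimizing a cost function over an objective image quality metric.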
The proposed model owes its efficiency, in terms of computational simplicity, accuracy, and applicability, to four choices: a nonlinear modulation function with minimal computational complexity; a base JND model selected for its simplicity and accuracy; a computational visual-saliency model that accurately identifies salient areas; and an efficient cost function solved using an appropriate objective image quality assessment metric. To evaluate the proposed model, a set of objective and subjective experiments was performed on 10 images selected from the MIT database. For the subjective experiment, a Two-Alternative Forced Choice (2AFC) method was used to compare perceived image quality; for the objective experiment, SSIM and IW-SSIM were used. The experimental results show that the proposed model achieves a significant advantage over existing models in the subjective experiment and, on average, outperforms the compared models in the objective experiment. An analysis of computational complexity further shows that the proposed model runs faster than the compared models.
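A 2AFC comparison of the kind used in the subjective experiment is typically summarized by counting how often observers preferred one model's output and testing that count against chance. The sketch below is an illustrative exact sign test, not the statistical procedure reported in the paper, and the preference counts are made up.

```python
from math import comb

def two_sided_sign_test(wins, trials):
    """Exact two-sided binomial (sign) test against chance (p = 0.5).

    In a 2AFC experiment each trial asks an observer which of two
    processed images looks better; `wins` counts trials on which one
    model was preferred. Under the null hypothesis of no preference,
    wins ~ Binomial(trials, 0.5).
    """
    # Probability of each possible outcome under the null.
    probs = [comb(trials, k) * 0.5**trials for k in range(trials + 1)]
    p_obs = probs[wins]
    # Two-sided p-value: total probability of outcomes no more likely
    # than the observed one (small tolerance guards float comparison).
    return min(1.0, sum(p for p in probs if p <= p_obs * (1.0 + 1e-9)))

# Hypothetical tally: one model preferred in 17 of 20 paired comparisons.
p = two_sided_sign_test(17, 20)
```

With 17 wins out of 20 the two-sided p-value is well below 0.05, so such a preference pattern would count as a statistically significant subjective advantage.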

Language:
Persian
Published:
Signal and Data Processing, Volume:17 Issue: 2, 2020
Pages:
71 to 84
magiran.com/p2170756  