
简单光学系统的宽光谱点扩散函数估计

郑云达 黄玮 潘云 徐明飞 贾树强 张晓菲 卢勇男

引用本文: 郑云达, 黄玮, 潘云, 徐明飞, 贾树强, 张晓菲, 卢勇男. 简单光学系统的宽光谱点扩散函数估计[J]. 中国光学, 2019, 12(6): 1418-1430. doi: 10.3788/CO.20191206.1418
Citation: ZHENG Yun-da, HUANG Wei, PAN Yun, XU Ming-fei, JIA Shu-qiang, ZHANG Xiao-fei, LU Yong-nan. Wide-spectrum PSF estimation for simple optical system[J]. Chinese Optics, 2019, 12(6): 1418-1430. doi: 10.3788/CO.20191206.1418

简单光学系统的宽光谱点扩散函数估计

doi: 10.3788/CO.20191206.1418
基金项目: 应用光学国家重点实验室资助
  • 中图分类号: TP394.1;TH691.9

Wide-spectrum PSF estimation for simple optical system

Funds: Supported by the State Key Laboratory of Applied Optics
    Author Bio:

    ZHENG Yunda (1992—), Ph.D., male, from Yanbian, Jilin. He received his bachelor's degree from the University of Science and Technology of China in 2014 and is mainly engaged in research on image restoration and optical design. E-mail: yundazheng@foxmail.com

    HUANG Wei (1965—), researcher and doctoral supervisor, male, from Changchun, Jilin. He is mainly engaged in research on optical system design. E-mail: huangw@ciomp.ac.cn

    Corresponding author: HUANG Wei, E-mail:huangw@ciomp.ac.cn
Publication history
  • Received: 2019-01-10
  • Revised: 2019-03-09
  • Published: 2019-12-01


摘要: 为了准确获取简单光学系统的点扩散函数(PSF),提升图像复原质量,本文提出了一种基于PSF测量的宽光谱PSF估计方法。首先,测量了光学系统的窄带PSF,并结合图像匹配算法,标定了实际光学系统中的探测器位置和光轴中心偏移。然后,模拟实际光学系统各波长、各视场的PSF,再结合目标反射光谱和探测器光谱敏感信息计算实际目标的宽光谱PSF。实验结果表明:本文提出的PSF估计方法明显优于窄带PSF估计和盲估计方法,复原图片质量和稳定性均有明显提升,能够准确估计实际光学成像系统的PSF。

    • Optical system aberrations are a common cause of image degradation. To compensate for aberrations and produce a clear image, optical designers typically use a large number of lenses and expensive materials, which makes optical systems bulky and costly.

      Simple optical system imaging is an emerging technique that simplifies the optical system through computational imaging. It first relaxes the constraints of optical design by building the front end as a simple optical system with residual aberrations. At the back end, a spatially varying deconvolution algorithm is then applied to the blurred image captured by the simple optical system to reduce the aberration-induced blur, achieving high image quality while simplifying the optical system.

      In this technique, the point spread function (PSF) of the optical system serves as the convolution kernel in the deconvolution algorithm and is an important factor affecting the restoration result. The PSF represents the impulse response of an optical system; its Fourier transform is the optical transfer function (OTF), which describes the response of the system in the frequency domain. If the PSF is inaccurate, the restored image is prone to severe ringing artifacts that ultimately degrade image quality.

      The inclined-edge method is a widely used PSF acquisition method[1-2]. It extracts sub-images containing inclined knife edges, uses fitting, interpolation and differentiation to obtain the line spread function of each sub-image, and then calculates the PSF of the system. However, this method assumes that the PSF is Gaussian, while the PSF of a real system is far more complicated than a Gaussian, so its limitations are obvious. Blind deconvolution algorithms use prior knowledge to estimate the PSF and the sharp image directly from the blurred image[3-6]. However, the PSF of an optical system is spatially varying and must be processed in blocks; the amount of information in a small block of the blurred image is very limited, and blind deconvolution is itself an ill-conditioned problem, so the restoration result is not reliable. To overcome these problems, calibration methods using blurred/sharp image pairs have been proposed[7-9]. These methods produce a calibration plate with distinct feature information across the full field of view, capture a blurred image of the plate with the simple optical system, and then deconvolve it against the synthesized or acquired sharp calibration-plate image to obtain the spatially varying PSFs. The direct measurement method is the most intuitive: the imaging system directly captures a target board carrying a point-source array[10]. However, this method is susceptible to sensor noise[11]. To reduce the impact of noise on PSF acquisition, researchers have built mathematical models to fit the measured PSF and then used the fitting results to reconstruct a noise-free PSF[12-14]. This approach is limited by the accuracy of the mathematical model, and an inaccurate model may cause large fitting errors.
Shih et al.[15] calibrated the tolerances of the optical system by analyzing the measured PSF against their established PSF model, then modified the designed system to simulate the PSF of the real optical system, effectively avoiding the influence of measurement noise.

      However, all of the aforementioned methods overlook an important fact: image sensors are typically broadband, meaning that the filter in front of the sensor passes light over a fairly wide spectral range. The spectrum of the light entering the optical system and received by the sensor therefore varies with the target. Since the PSF of the imaging system is closely related to the wavelength of the incident light, it is difficult to determine the PSF of a real target by simple measurement or calibration alone; the target reflectance spectrum and the spectral sensitivity of the sensor must be considered together. In this paper, a wide-spectrum PSF estimation method for simple optical systems based on PSF measurement is proposed. The sensor position and the optical-axis deviation of the real optical system are calibrated by measuring narrow-band PSFs and applying an image matching method. Then, the PSFs of each field of view and each wavelength of the real optical system are simulated, and the wide-spectrum PSF of the simple optical system is calculated by combining the target reflectance spectrum and the spectral sensitivity of the sensor. The method retains the directness of PSF measurement while effectively suppressing sensor noise through simulation. It accurately estimates the wide-spectrum PSF of the real simple optical system, avoids the ringing artifacts caused by incorrect PSF estimation, and improves the quality and stability of the restored image.

    • The PSF is the light intensity distribution that a point target forms on the image plane through an optical system. In an ideal imaging system, the light emitted from the target is focused into an Airy spot after passing through the optical system. In a real system, optical aberrations prevent the light emitted from the point target from being well focused, forming a larger diffuse spot and blurring the image. Moreover, light of different wavelengths has a different refractive index in the optical material and is therefore deflected differently, so the PSF also changes with the wavelength of the incident light. For a diffuse object, light reflected from each point target enters the optical system and is projected onto the image sensor. Assuming that the reflectance spectrum of the target is locally consistent, the blurred image can be regarded as the convolution of the true sharp image with the PSFs of the optical system. The imaging model can be expressed as:

      $$b(x,y)=i(x,y)\otimes \int r(\lambda)\,s(\lambda)\,k(x,y,\lambda)\,{\rm d}\lambda + n(x,y)=i(x,y)\otimes h(x,y)+n(x,y) \tag{1}$$

      where λ is the wavelength of the incident light, x and y are the image coordinates, ⊗ denotes two-dimensional convolution, b is the blurred image, r is the normalized reflectance spectrum of the target, i is the true sharp image, s is the spectral sensitivity of the sensor, k is the single-wavelength PSF of the optical system, n is the sensor noise and h is the PSF of the real point target. Therefore, the blur kernel of the real optical system, i.e. the PSF of the real point target, can be expressed in terms of the spectral sensitivity of the sensor, the normalized target reflectance spectrum and the single-wavelength PSFs as follows:

      $$h(x,y)=\int r(\lambda)\,s(\lambda)\,k(x,y,\lambda)\,{\rm d}\lambda \tag{2}$$
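Discretized over the sampled wavelengths, Eq. (2) reduces to a weighted sum of the simulated single-wavelength PSFs with weights r(λ)s(λ). A minimal sketch in Python (function and array names are illustrative, not from the paper):

```python
import numpy as np

def wide_spectrum_psf(psfs, reflectance, sensitivity):
    """Weighted combination of single-wavelength PSFs (discrete form of Eq. (2)).

    psfs        -- (n_wavelengths, h, w) stack of simulated PSFs k(x, y, lambda)
    reflectance -- (n_wavelengths,) normalized target reflectance r(lambda)
    sensitivity -- (n_wavelengths,) sensor spectral sensitivity s(lambda)
    """
    weights = reflectance * sensitivity        # r(lambda) * s(lambda)
    weights = weights / weights.sum()          # normalize the spectral weights
    h = np.tensordot(weights, psfs, axes=1)    # sum over the wavelength axis
    return h / h.sum()                         # keep the blur kernel at unit energy
```

Normalizing the result keeps the combined kernel energy-preserving, so deconvolution does not change the overall image brightness.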
    • Equation (2) indicates that calculating the wide-spectrum PSF of a real object point requires the PSF of every single wavelength. However, measuring the PSF of every wavelength in every field of view would consume enormous manpower and material resources, and the results would be highly susceptible to sensor noise. In this paper, two sets of narrow-band spatially varying PSFs are measured and combined with an image matching algorithm to calibrate the machining errors that most affect the PSF of the real optical system, namely the sensor position error and the optical-axis deviation. The simulation model of the optical system is corrected accordingly so that the simulated PSF is closer to that of the real system. Optical design software is then used to densely simulate the spatially varying PSF at each wavelength, accurately yielding the single-wavelength PSFs. Finally, the wide-spectrum PSF is obtained by a weighted calculation using the acquired spectral information. A flow chart of the proposed method is shown in Fig. 1.

      图  1  本文提出的宽光谱PSF估计的流程图

      Figure 1.  Flow chart of the proposed wide-spectrum PSF estimation

    • The experimental setup for measuring the narrow-band PSF is shown in Fig. 2, which consists of an LED source, two narrow-band filters(650 nm and 532 nm), an optical pinhole and a self-designed simple optical system.

      图  2  窄带PSF测量的实验装置示意图

      Figure 2.  Schematic diagram of the experimental setup for narrow-band PSF measurement

      After the object distance is fixed, the position of the sensor is adjusted until a clear image is received, and the sensor is then fixed in place. A narrow-band point source consisting of the light source, the optical pinhole and a narrow-band filter is captured by the sensor; this constitutes one narrow-band PSF measurement. The point source is moved perpendicular to the optical axis to measure narrow-band PSFs in different fields. The experiment is carried out in a dark room to reduce interference from stray light. The exposure time must also be controlled during PSF acquisition to avoid saturating the measured PSF intensity values.

    • In a real optical system, assembly error causes the intersection of the optical axis with the sensor to deviate from the sensor center, so the PSF of the real system is usually not circularly symmetric about the center of the image. Ignoring this error leads to field-of-view mismatches in the simulated PSF, which seriously affects the accuracy of PSF estimation.

      The PSF of an off-axis object point is symmetric about the meridional plane, so the axis of symmetry of each off-axis PSF must pass through the intersection of the optical axis and the sensor. After finding the symmetry axes of all the measured PSFs, the least-squares method is used to find the point in the image closest to all of these axes; this point is the calibrated optical center.
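The "closest point to all symmetry axes" step has a closed-form least-squares solution. A sketch under the assumption that each axis is supplied as a point on the line plus a direction vector (names are illustrative):

```python
import numpy as np

def closest_point_to_lines(points, directions):
    """Least-squares point closest to a set of 2D lines.

    Line i passes through points[i] with direction directions[i] (here, the
    symmetry axis of one measured off-axis PSF).  The calibrated optical
    center is the point minimizing the summed squared distance to all axes.
    """
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for a, d in zip(points, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(2) - np.outer(d, d)   # projector onto the line's normal space
        A += P                           # accumulate normal equations
        b += P @ a
    return np.linalg.solve(A, b)
```

With exactly two non-parallel axes this returns their intersection; with more axes it averages out per-axis measurement error.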

    • The position of the sensor in a real optical system is usually uncertain, and sensor positional error is difficult to avoid, so the real system is defocused to some degree relative to the designed system. Ignoring this defocus introduces serious error into the PSF simulation. In this paper, the sensor position of the real optical system is calibrated by matching simulated PSFs for different sensor positions against the measured PSFs. First, the field of each measured PSF is matched with a simulated PSF. A set of PSFs in different fields of view is generated with optical design software, and the field of each measured PSF is computed from the previously calibrated optical-axis deviation. An acceptable field-matching error is set: if the field difference between a measured PSF and its nearest simulated PSF is less than this error, the two are considered to be in the same field of view; if the difference is greater, no simulated PSF shares a field with that measured PSF, and the measured PSF of that field is discarded. After field matching, the simulated PSFs for different sensor positions in the same field are matched against the measured PSF. Although the measured PSF is strongly affected by sensor noise, its size and shape are hardly changed and its intensity distribution is roughly the same as that of the real PSF. This paper therefore uses template matching, taking the maximum of the normalized cross-correlation matrix between the simulated and measured PSFs as the matching degree[16]. The normalized cross-correlation matrix can be expressed as:

      $$\gamma(x,y)=\frac{\displaystyle\sum_{s,t}\left[w(s,t)-\bar{w}\right]\left[f(x+s,y+t)-\bar{f}_{xy}\right]}{\left\{\displaystyle\sum_{s,t}\left[w(s,t)-\bar{w}\right]^{2}\sum_{s,t}\left[f(x+s,y+t)-\bar{f}_{xy}\right]^{2}\right\}^{1/2}} \tag{3}$$

      where $w$ is the measured PSF, $f$ is the simulated PSF, $\bar{w}$ is the mean of $w$, and $\bar{f}_{xy}$ is the mean of the region of $f$ that coincides with $w$ at offset $(x, y)$. The range of γ(x, y) is [−1, 1]; the larger the value, the better $f$ matches $w$, and γ reaches 1 when the normalized $f$ is identical to $w$. The sensor position with the highest matching degree is taken as the calibrated sensor position; at this position, the simulated PSF is closest to the measured PSF.

      For all wavelengths and fields, the measured PSFs and the simulated PSFs corresponding to the calibrated sensor position should be the most similar. Therefore, the matching degree averaged over all wavelengths and matched fields is used as the final matching degree, further reducing the impact of noise.
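A direct (unoptimized) sketch of this matching-degree computation, sliding the measured PSF w over the simulated PSF f and taking the maximum normalized cross-correlation; the helper name is illustrative, and production code would typically use an FFT-based NCC:

```python
import numpy as np

def matching_degree(measured, simulated):
    """Maximum normalized cross-correlation between a measured PSF w and a
    simulated PSF f (f is assumed at least as large as w)."""
    w = measured - measured.mean()
    H, W = simulated.shape
    h, wd = measured.shape
    best = -1.0
    for y in range(H - h + 1):
        for x in range(W - wd + 1):
            patch = simulated[y:y + h, x:x + wd]
            f = patch - patch.mean()                    # subtract local mean f_xy
            denom = np.sqrt((w ** 2).sum() * (f ** 2).sum())
            if denom > 0:
                best = max(best, float((w * f).sum() / denom))
    return best
```

Because both arrays are mean-subtracted and normalized, the score is insensitive to overall brightness and exposure differences between measurement and simulation.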

    • The back focal length and image size of the optical system are adjusted according to the calibrated sensor position and optical-axis deviation. The PSF of the real optical system at each wavelength is then densely simulated using the two-dimensional image simulation function of CODE V. After the spectral sensitivity of the sensor and the target reflectance spectrum are obtained, a weighted sum of the simulated single-wavelength PSFs is computed to generate the wide-spectrum PSF.

    • In order to verify the accuracy of the proposed PSF estimation method, we use a self-designed simple optical system to capture blurred images. The blurred images are restored using the PSF estimated by the proposed method, the blind-estimated PSF[3] and the single-wavelength PSF, and the results are compared.

      First, a self-designed simple camera is used to capture the blurred image of the target board and perform the PSF measurement. The configuration and parameters of the self-designed simple camera are shown in Fig. 3 and Tab. 1. The simulated prototype of the camera is shown in Fig. 4.

      图  3  自制简单相机结构图

      Figure 3.  Configuration of the self-designed simple camera

      表 1  自制简单相机的镜头参数

      Table 1.  Lens parameters of the self-designed simple camera

      Surface Radius Thickness Glass
      Object Infinity Infinity
      1 31.84 4.00 HK9L_CDGM
      2 125.56 5.80
      stop Infinity 5.80
      4 21.29 3.50 HK9L_CDGM
      5 109.53 25.42
      Image Infinity 0

      图  4  自制简单相机的实物图

      Figure 4.  Prototype of self-designed simple camera

      The optical-axis deviation and the sensor position of the optical system are calibrated using the measured PSFs. The sensor-position matching curve is shown in Fig. 5, where the abscissa is the position difference between the simulated sensor and the designed one and the ordinate is the matching degree. The matching degree reaches its maximum of 0.857 0 at an abscissa of 1.34 mm. With the sensor at this position, the matched simulated PSF is very close to the measured PSF in all measured fields. Fig. 6 compares 8 sets of measured PSFs with their matched simulated PSFs.

      图  5  探测器位置匹配曲线

      Figure 5.  Sensor-position matching curve

      图  6  测量PSF与匹配的模拟PSF对比

      Figure 6.  Comparison of measured PSFs and matching simulated PSFs

      It can be seen that the 8 sets of measured PSFs and their matched simulated PSFs are very close in size and shape, but the measured PSFs are affected by noise and some detail is lost. The PSF simulated after calibration of the optical system is minimally affected by noise, contains richer detail, and represents the single-wavelength PSF more accurately.

      The reflectance spectrum of the target board is measured with an Ocean Optics USB4000 fiber spectrometer and is shown in Fig. 7. The spectral sensitivity of the sensor is obtained from the sensor's technical data sheet. Finally, the spatially varying wide-spectrum PSF of the real system is calculated using equation (2), as shown in Fig. 8.

      图  7  目标板反射光谱

      Figure 7.  Reflectance spectrum of target board

      图  8  实际成像系统空间变化的宽光谱PSF

      Figure 8.  Spatially varying wide-spectrum PSF of the real imaging system

      The captured blurred image is divided into 7×13 overlapping rectangular patches, and each patch is restored with the deconvolution method of Krishnan et al.[17] using the proposed wide-spectrum PSF. The restored patches are then stitched together. For comparison, the patches are also restored with the blind-estimated PSF[3] and the single-wavelength PSF, again using the deconvolution algorithm of Krishnan et al. Note that the chosen single-wavelength PSF corresponds to 532 nm, the wavelength at which the sensor's spectral sensitivity is highest. The restoration results are compared in Fig. 9 and Fig. 10.
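The patch-wise restoration loop can be sketched as follows, with `deconvolve(patch, psf)` standing in for any non-blind deconvolution routine such as Krishnan et al.'s method; the grid size, overlap handling and averaging here are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def restore_by_patches(blurred, psf_grid, deconvolve, overlap=16):
    """Split the image into overlapping patches, deconvolve each patch with
    its local PSF from psf_grid (rows x cols), and average the overlaps."""
    rows, cols = len(psf_grid), len(psf_grid[0])
    H, W = blurred.shape
    out = np.zeros_like(blurred, dtype=float)
    weight = np.zeros_like(out)
    ys = np.linspace(0, H, rows + 1).astype(int)   # patch row boundaries
    xs = np.linspace(0, W, cols + 1).astype(int)   # patch column boundaries
    for i in range(rows):
        for j in range(cols):
            y0, y1 = max(ys[i] - overlap, 0), min(ys[i + 1] + overlap, H)
            x0, x1 = max(xs[j] - overlap, 0), min(xs[j + 1] + overlap, W)
            restored = deconvolve(blurred[y0:y1, x0:x1], psf_grid[i][j])
            out[y0:y1, x0:x1] += restored          # accumulate overlapping results
            weight[y0:y1, x0:x1] += 1.0
    return out / weight                            # average where patches overlap
```

With an identity `deconvolve` the function returns the input unchanged, which is a quick sanity check for the overlap blending.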

      图  9  “卫星”图像[18]复原结果对比。(a)模糊图;(b)Krishnan盲估PSF复原结果;(c)单波长PSF复原结果;(d)本文估计PSF的复原结果

      Figure 9.  Comparison of restored results for "satellite" image[18]. (a)Blurred image, (b)restored results of Krishnan′s blind-estimated PSF, (c)restored results of single-wavelength PSF and (d)restored results of proposed method

      图  10  “标靶”图像复原结果对比。(a)模糊图;(b)Krishnan盲估PSF复原结果;(c)单波长PSF复原结果;(d)本文估计PSF的复原结果

      Figure 10.  Comparison of restored results for target image. (a)Blurred image, (b)restored results of Krishnan′s blind-estimated PSF, (c)restored results of single-wavelength PSF and (d)restored results of proposed method

      Fig. 9(a) and 10(a) are the original blurred images captured by the self-designed simple camera at a resolution of 1 920×1 080. Fig. 9(b) and 10(b) are the results restored with the blind-estimated PSF: the severely ill-conditioned problem makes accurate PSF estimation difficult, causing strong ringing artifacts and residual blur, and the images near the patch seams are also unnatural. Fig. 9(c) and 10(c) are the results restored with the single-wavelength PSF; because the influence of the wide spectrum is ignored, the PSF contains errors and the results also show some ringing. Fig. 9(d) and 10(d) are the results of the proposed method: in both the central and marginal fields, image quality is clearly improved over the blurred image, there is almost no ringing, and the image quality is stable.

      The grayscale mean gradient (GMG) of each image is calculated as a quantitative index of restored image quality. GMG reflects the contrast and detail of an image: the larger the value, the clearer the image and the better the restoration. The expression for GMG is:

      $$\mathrm{GMG}=\frac{1}{(M-1)(N-1)}\sum_{x=1}^{M-1}\sum_{y=1}^{N-1}\sqrt{\frac{\Delta I_{x}^{2}(x,y)+\Delta I_{y}^{2}(x,y)}{2}} \tag{4}$$

      where M and N are the numbers of pixels in the horizontal and vertical directions of the image, and ΔIx and ΔIy are the horizontal and vertical gradients of the image. The evaluation results for Fig. 9 and Fig. 10 are shown in Tab. 2. The quality of the restored images obtained by the proposed method is superior to that obtained with the blind-estimated PSF or the single-wavelength PSF.

      表 2  图像灰度平均梯度对比

      Table 2.  Comparison of image grayscale mean gradients

      Satellite Target
      Blurred image 0.002 4 0.003 1
      Krishnan et al. 0.006 9 0.008 5
      Using 532 nm PSF 0.108 0 0.013 1
      Ours 0.109 0 0.014 1
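A minimal implementation of the GMG index of Eq. (4), assuming forward differences for the gradients (the cropping convention is an assumption):

```python
import numpy as np

def gmg(image):
    """Grayscale mean gradient: mean RMS of horizontal/vertical forward differences."""
    img = np.asarray(image, dtype=float)
    dIx = np.diff(img, axis=1)[:-1, :]   # horizontal gradient, cropped to (M-1, N-1)
    dIy = np.diff(img, axis=0)[:, :-1]   # vertical gradient, cropped to (M-1, N-1)
    return float(np.sqrt((dIx ** 2 + dIy ** 2) / 2.0).mean())
```

A flat image scores 0, and sharper edges raise the score, which is why the restored images in Tab. 2 score higher than the blurred originals.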
    • In this paper, based on the wide-spectrum characteristics of real optical imaging systems, a calculation model for the PSF of a real object point was established and a wide-spectrum PSF estimation method based on PSF measurement was proposed. The measured PSFs were used to calibrate the real optical system, the single-wavelength PSF of each field of view was then simulated, and the wide-spectrum PSF was finally calculated by combining the target reflectance spectrum and the spectral sensitivity of the sensor. Experimental results show that, compared with the blind-estimated PSF and the single-wavelength PSF, the PSF estimated by the proposed method significantly improves the quality and stability of the restored image. The proposed method can accurately estimate the PSF of a real optical imaging system.

      When calibrating the machining errors of the optical system with the measured narrow-band PSFs, only two kinds of error are considered: the optical-axis center deviation and the sensor position deviation. The method is therefore suitable for optical systems that are insensitive to tolerances. For tolerance-sensitive systems, more general calibration methods may be needed to estimate the tilt, decentration and other tolerances of the system, or more precise fabrication and alignment techniques may be required to reduce the impact of other errors on PSF estimation.

      ——中文对照版——

    • 光学系统像差是图像降质的重要原因。为了补偿像差,使图像更清晰,光学设计者会在设计光学系统时使用大量的镜片和昂贵的材料,这就导致光学系统体积笨重、成本高昂。

      简单光学系统成像是一种新兴的基于计算成像的简化光学系统的技术。该技术首先放宽了系统光学设计的约束,将前端的光学系统设计为一个有像差残余的简单光学系统,然后在后端对简单光学系统获取的模糊图像进行空间变化去卷积计算,以减小光学像差引起的模糊,其可在简化光学系统的同时,实现高像质成像。

      在这一技术中,作为去卷积计算的卷积核,即光学系统的点扩散函数(Point Spread Function, PSF),是影响复原结果的重要因素。PSF表示光学系统的脉冲响应,其傅立叶变换是光学传递函数(Optical Transfer Function, OTF),表示系统在频域的响应。如果PSF获取不准确,复原图像很容易产生严重的振铃效应,影响成像质量。

      倾斜刃边法是一个广泛使用的PSF获取方法[1-2],它抠取图像中具有倾斜刀刃边缘的子图像,用拟合、插值和微分的方法得到子图像的线扩散函数,进而计算系统的点扩散函数。不过该方法假设PSF是高斯型,而实际系统的PSF远比高斯型复杂,所以这种方法的局限性很大。盲去卷积算法利用先验知识直接由模糊图像估计光学系统的PSF和清晰图像[3-6]。然而光学系统的PSF是空间变化的,需要进行分块处理,小分块模糊图像的信息量十分有限,而盲去卷积本身又存在严重的病态问题,其复原结果并不可靠。为了解决盲去卷积算法局部信息不充分的问题,一些模糊/清晰图像对的标定方法被提出[7-9]。这些方法制作了全视场都具有鲜明特征信息的标定板,使用简单光学系统拍摄标定板的模糊图像,然后将其与合成或采集的清晰标定板图像进行去卷积计算,从而得到空间变化的PSF。直接测量法是最直观的PSF获取方法,即直接用成像系统拍摄带有点光源阵列的目标板[10],然而这种方法很容易受到探测器噪声的干扰[11]。为减弱噪声对PSF获取的影响,研究者们建立了数学模型对测量的PSF进行拟合,然后利用拟合结果重建无噪声干扰的PSF[12-14]。但这种方法受限于数学模型的准确性,建模不准确可能导致实际拟合结果存在较大误差。Shih等人[15]利用测量PSF与其所建立的PSF模型标定了光学系统的公差,然后对设计的光学系统进行修正,模拟生成了实际光学系统的PSF,有效避免了测量噪声的影响。

      然而,前面所述的方法都忽略了一个重要的事实,即图像探测器通常是宽带的,这就表示探测器前面的滤光片允许较宽光谱范围的光通过。于是,入射光学系统及被探测器接收到的光的光谱会随拍摄目标的变化而变化,而成像系统的PSF又与入射光的波长密切相关,因此很难通过简单的测量或标定确定实际目标的PSF,需要综合考虑目标反射光谱和探测器光谱敏感等信息。基于此,本文提出了一种基于PSF测量的简单光学系统宽光谱PSF估计方法。通过窄带PSF测量和图像匹配方法,确定实际光学系统的探测器位置和光轴中心的偏离。然后,模拟实际光学系统各视场、各波长的PSF,结合目标反射光谱和探测器光谱敏感度计算简单光学系统的PSF。该方法可直接测量PSF,通过模拟又可以有效减少探测器噪声的干扰,能准确估计实际简单光学系统的宽光谱PSF,其避免了PSF的错误估计而引起的振铃效应,提升了复原图像的质量及稳定性。

    • PSF用于描述点目标通过光学系统在像面上形成的光强分布。理想光学成像系统中,点目标所发出的光线经过光学系统后聚焦,形成艾里斑。而在实际系统中,由于光学像差的存在,点目标发出的光线不能聚焦,实际系统中形成的是尺寸更大的弥散斑,造成图像模糊。此外,不同波长的光折射率不同,光线的偏折也不同,因此PSF还会随着入射光波长的变化而变化。

      对于漫反射物体,光通过点目标反射进入光学系统,成像在图像传感器上。假设目标的反射光谱局部一致,最终获得的模糊图像可以看成是真实清晰图像与光学系统PSF卷积的结果,成像模型可表示为:

      $$b(x,y)=i(x,y)\otimes \int r(\lambda)\,s(\lambda)\,k(x,y,\lambda)\,{\rm d}\lambda + n(x,y)=i(x,y)\otimes h(x,y)+n(x,y) \tag{1}$$

      其中,λ表示入射光波长,x、y表示图像坐标,b是模糊图像,r是归一化的目标反射光谱,i是真实清晰图像,s是探测器光谱敏感度,k是光学系统单一波长光的PSF,n表示探测器噪声,h是实际点目标的PSF。因此,实际光学系统的模糊核,即实际点目标的PSF可由探测器光谱敏感度、归一化的目标反射光谱和单波长光的PSF表示:

      $$h(x,y)=\int r(\lambda)\,s(\lambda)\,k(x,y,\lambda)\,{\rm d}\lambda \tag{2}$$
    • 由公式(2)可知,要计算实际物点的宽光谱PSF,需要知道任意单波长光的PSF。而仅通过测量或标定等手段获得各视场任意单波长光的PSF会消耗大量的人力和物力,而且其结果还会受到探测器噪声的影响。针对这一问题,本文通过测量光学系统2组窄带空间变化的PSF,结合图像匹配算法标定实际光学系统中对PSF影响重大的加工误差,即探测器位置与光轴中心偏离,以修正光学系统,使模拟光学系统的PSF更接近实际光学系统的PSF。然后利用光学设计软件密集地模拟各波长空间变化的PSF,准确获取单波长光PSF。最终,利用获取的光谱信息,加权计算宽光谱PSF。本文方法的流程图如图 1所示。

    • 测量窄带PSF的实验装置如图 2,包括LED光源、2片窄带滤光片(650 nm和532 nm)、光学小孔和自制简单光学系统。

      固定物距,调整图像传感器位置,相机能够清晰成像后固定图像传感器的位置。用图像传感器采集由光源、光学小孔和窄带滤光片组成的窄带点光源的像,即为一次窄带PSF的测量。沿垂直于光轴方向移动点光源,测量不同视场的窄带PSF。实验过程要在暗室中进行,以减少其他杂光的干扰。在PSF采集时,还要注意控制曝光时间,以避免测量的PSF强度值溢出。

    • 实际光学系统中,图像传感器的装调误差会使光轴和图像传感器交点偏离图像传感器中心,因此实际光学系统的PSF通常不会以图像中心呈圆对称分布。忽略该误差会导致模拟PSF视场出现错误匹配,严重影响PSF估计的准确程度。

      光学系统中,轴外物点的PSF关于子午平面对称,因此,轴外物点PSF的对称轴必然通过光轴与图像传感器的交点。找到所有测量光学系统PSF的对称轴,利用最小二乘法计算图像中距离这些对称轴最近的点,该点即为标定的光轴中心点。

    • 实际光学系统的图像传感器位置通常是不确定的,图像传感器的位置误差难以避免,这就导致实际系统相对于设计系统会产生一定程度的离焦,忽略这个离焦将会使PSF模拟产生严重误差。本文通过不同图像传感器位置的模拟PSF与测量PSF匹配的方法标定实际光学系统的图像传感器位置。

      首先匹配测量PSF与模拟PSF的视场。使用光学设计软件模拟生成不同视场PSF的集合。根据前文标定的实际系统图像传感器的光轴中心点计算测量PSF的视场。设定一个可接受的视场匹配误差,当模拟PSF集合中视场最接近测量PSF的元素与测量PSF的视场差距小于设定的误差时,它们被视为处于同一视场;如果这一差距大于设定的误差,则意味着模拟PSF中没有与这个测量PSF处于同一视场的,应该舍弃这个视场的测量PSF。

      视场匹配后,对同一视场不同图像传感器位置的模拟PSF与测量PSF进行匹配。虽然测量PSF受图像传感器噪声影响严重,但是其尺寸、形状几乎不会改变,强度分布也与真实PSF大致相同。因此本文使用模板匹配方法对PSF进行匹配,以模拟PSF与测量PSF归一化互相关矩阵的最大值为匹配度[16]。模拟PSF与测量PSF的归一化互相关矩阵可表示为:

      $$\gamma(x,y)=\frac{\displaystyle\sum_{s,t}\left[w(s,t)-\bar{w}\right]\left[f(x+s,y+t)-\bar{f}_{xy}\right]}{\left\{\displaystyle\sum_{s,t}\left[w(s,t)-\bar{w}\right]^{2}\sum_{s,t}\left[f(x+s,y+t)-\bar{f}_{xy}\right]^{2}\right\}^{1/2}} \tag{3}$$

      其中,$w$为测量PSF,$f$为模拟PSF,$\bar{w}$为$w$的平均值,$\bar{f}_{xy}$为$f$中与$w$重合区域的平均值。γ(x, y)的值域为[-1, 1],值越大,$f$与$w$的匹配程度越高。当归一化的$f$与$w$相同时,γ值达到1。本文取能够使匹配度最高的图像传感器位置为标定的图像传感器位置,此时,模拟的PSF最接近实际测量的PSF。

      对于所有波长和视场,测量PSF及与其相匹配的图像传感器位置的模拟PSF都应具有最高的相似度,因此,本文将所有波长和视场匹配度的平均值作为最终匹配度,以进一步降低噪声的影响。

    • 根据标定的图像传感器位置和光轴中心偏离指标,调整设计光学系统的后截距及像面尺寸。使用光学设计软件CODE V的二维成像模拟功能密集地模拟各波长实际光学系统的PSF。

      在获取探测器光谱敏感度、目标反射光谱后,对模拟的单波长光PSF进行加权计算,最终生成宽光谱PSF。

    • 为了检验本文PSF估计方法的准确性,本文用自制的简单光学系统拍摄了模糊图像,并分别使用本文的PSF估计方法、盲估计的PSF算法[3]和单波长的PSF方法对模糊图像进行复原,并对得到的结果进行对比。

      首先,使用自制简单相机拍摄目标板,采集模糊图像,并进行PSF测量,自制简单相机的结构和参数分别如图 3表 1所示,相机的实物图如图 4所示。

      利用测量的PSF标定光学系统光轴中心偏离和探测器位置。探测器位置的匹配曲线如图 5所示。图 5的横坐标表示模拟的图像传感器位置与设计值的差,纵坐标为匹配度。当横坐标为1.34 mm时,匹配度达到最高,为0.857 0。当图像传感器处于这个位置时,对于所有测量视场,匹配的模拟PSF都与测量PSF非常接近。图 6展示了8组测量PSF及与之相匹配的模拟PSF的对比结果。可以看出,这8组测量PSF与匹配的模拟PSF在尺寸和形状上确实非常接近,但测量PSF受到噪声影响,细节信息丢失严重,而本文在光学系统标定后模拟生成的PSF受到噪声影响极小,细节更丰富,能够更准确地表示单波长光的PSF。

      使用Ocean Optics USB4000光纤光谱仪测量目标板反射光谱,见图 7。再通过查阅图像传感器的技术数据获得图像传感器的光谱敏感度。最终,利用公式(2)计算实际系统的空间变化的宽光谱PSF,如图 8所示。

      将采集的模糊图像分成7×13个矩形分块,使用Krishnan等人的去卷积方法[17]和本文估计宽光谱PSF方法复原分块图像,最后,对复原后的分块图像进行拼接。

      作为对比,本文也使用盲估计的PSF[3]和单波长的PSF对分块图像进行了复原,所使用去卷积算法同样是Krishnan等人的方法。其中,单波长PSF的波长为532 nm,是图像传感器光谱敏感度最高的波长。

      复原结果如图 9和图 10所示。9(a)和10(a)为自制简单相机拍摄的原始模糊图,其分辨率为1 920×1 080。9(b)和10(b)为盲估计PSF去卷积结果,可见,严重的病态问题导致PSF难以准确估计,因此产生了严重的振铃效应及残余模糊,分块接缝处也出现了明显的痕迹。9(c)和10(c)为使用单波长PSF复原的结果,由于未考虑宽光谱的影响,PSF同样存在误差,其结果也出现了一定程度的振铃效应。9(d)和10(d)为本文方法的处理结果,可见,无论是中心视场还是边缘视场,图像质量较模糊图像都有明显提高,几乎没有振铃效应,像质稳定。

      计算图像的灰度平均梯度(Grayscale Mean Gradient, GMG)作为复原图像清晰度的定量评价指标。GMG能够反映图像的对比度和细节,数值越大表示图像越清晰,复原的图像质量越好。GMG的表达式如下:

      $$\mathrm{GMG}=\frac{1}{(M-1)(N-1)}\sum_{x=1}^{M-1}\sum_{y=1}^{N-1}\sqrt{\frac{\Delta I_{x}^{2}(x,y)+\Delta I_{y}^{2}(x,y)}{2}} \tag{4}$$

      式中,M、N分别表示图像水平和竖直方向的像素数,ΔIx和ΔIy表示图像水平和竖直方向的梯度。图 9和图 10的评价结果见表 2,可以看出,使用本文方法获取的PSF复原图像的质量优于使用盲估计PSF和单波长PSF的结果。

    • 本文根据实际光学成像系统的宽光谱特性,建立了实际物点PSF的计算模型,并由此提出了一种基于PSF测量的简单光学系统宽光谱PSF估计方法。利用测量的PSF标定实际光学系统,然后模拟实际系统各视场单波长光的PSF,结合目标反射光谱和图像传感器光谱敏感信息,最终计算出宽光谱PSF。实验结果证明,相较于盲估计PSF与单波长PSF方法,使用本文方法所估计的PSF能够明显提升复原图像质量和稳定性。本文方法能够准确估计实际光学成像系统的PSF。

      在利用测量的窄带PSF标定光学系统加工误差的过程中,本文仅考虑了两种误差,光轴中心偏移和图像传感器位置的偏离,这种方法适用于公差不敏感的光学系统。对于公差敏感的系统,可能还需要采取更通用的标定方法对系统的元件倾斜、偏心等误差进行估计,或使用更精密的光学系统加工装调技术以减小其他误差对PSF估计的影响。
