Design of a model for shape from focus method

Ivana HAMAROVÁ, Petr ŠMÍD, Pavel HORVÁTH

Citation: HAMAROVÁ Ivana, ŠMÍD Petr, HORVÁTH Pavel. Design of a model for shape from focus method[J]. Chinese Optics, 2016, 9(4): 439-451. doi: 10.3788/CO.20160904.0439

doi: 10.3788/CO.20160904.0439

CLC number: TH752

Funds: the grant of the Czech Science Foundation No. 13-12301S

Corresponding author: Ivana Hamarová (1982—), Ph.D., Institute of Physics of the Czech Academy of Sciences, Joint Laboratory of Optics of Palacky University and Institute of Physics AS CR. Her research interests include numerical modeling and simulation of optical fields and the design of optical measuring sensors. E-mail: ivana.hamarova@upol.cz
Abstract: A numerical model for simulating the depth measurement of an object by the shape from focus method based on the Laplacian operator is proposed. The core of the numerical simulation is the convolution of the ideal image predicted by geometrical optics with a polychromatic point spread function derived from the generalized aperture function of the lens, which includes a focus error, instead of a pillbox (cylinder) or Gaussian function. The model can use the parameters of real components of a sensor based on the shape from focus method: the spectrum of the light source, the aberrations of the optical system and the spectral sensitivity of the camera. The influence of the aberrations of the optical system (aberration-free, achromatic, chromatic) on the accuracy and reliability of the determination of the object's surface topography is presented. The results show that the model can effectively improve experimental efficiency, decrease time lag and reduce costs.

Publication history
  • Received: 2016-03-14
  • Revised: 2016-04-29
  • Published: 2016-08-01

• The shape from focus method[1-4] is a technique used in image processing for obtaining depth maps of an object. The principle of the method is based on the relation among the object distance, the focal length of the lens and the image distance, which is given by the Gaussian lens law. In terms of geometrical optics, each point in the object plane is projected onto a single point in the image plane, and a focused image is obtained. However, in terms of wave optics, which involves the wave character of light, a focused image "point" is no longer a point, but rather a spot. When the detection plane is displaced from the image plane, a defocused (blurred) image is obtained. During the measuring procedure, a sequence of images of the same scene of the object under investigation is acquired by moving the object along the optical axis. The depth of the object is determined by searching for the position of the object at which every object point is imaged sharply. To determine the focused image at each image point, the Sum-Modified Laplacian (SML) operator is applied to the image sequence[1].

The imaging performance of an optical system (image defocusing) is described by the convolution of the ideal image intensity (predicted by geometrical optics) with the Point Spread Function (PSF)[1, 5-7]. The convolution is often computed as the inverse Fourier transform of the product of the Fourier transforms of the ideal image and the PSF. However, the condition of a spatially invariant PSF has to be fulfilled. In the frequency domain, the Fourier transform of the PSF is the Optical Transfer Function (OTF)[8-9]. As the distance from the detection plane to the image plane increases, the blurring effect increases. Hence, defocusing is a filtering process in which the OTF acts as a low-pass filter[1].

In order to use the above-mentioned computation procedure for polychromatic light, two additional conditions need to be satisfied[6, 10]: (1) constant spectral composition and uniform spectral sensitivity across the detector area; (2) small variation of the local magnification with wavelength. Thus, the computation procedure is valid for a restricted class of polychromatic objects, for which the emitted radiance spectrum is, except for an intensity scaling factor, the same at every object point.

The polychromatic PSF is often represented by a pillbox (cylinder) function[11] or a Gaussian function[5], whose width, related to the blur circle (circle of confusion) around the image point, is calibrated according to the parameters of the real experimental setup. Computer models of image defocusing based on the pillbox-shaped PSF[11] or the Gaussian PSF[12] have already been developed and used, for instance, for assessing various focus measure operators[4] or a reliability measure aimed at assessing the quality of the depth map obtained by the shape from focus method[2].

However, the Gaussian model, as a sum of single light components, does not incorporate the weights of the components, and neither the Gaussian nor the pillbox model distinguishes the individual factors causing distortion of the intensity pattern. Among these factors, lens aberrations are worth mentioning. For our purpose we therefore use a more realistic model of image defocusing, which describes these aspects much better.

In the presented paper we use a PSF computed as the Fourier transform of the generalized aperture function of the lens[7, 13-14], which includes a focus error (a deviation from focused imaging) causing image blurring. In order to approximate a real situation, we take into account the spectral weights of the individual components of light[6, 8]. We also incorporate the chromatic aberration of the lens, a consequence of the dispersion of the lens material, which causes additional defocusing of the monochromatic light components. Our model makes it possible to suppress the chromatic aberration by the use of an achromatic lens, although it is known that achromatic lenses with completely different chromatic aberrations may have the same OTF[8].

Further, in the presented paper we simulate the translation of a 3D object by changing the object distance, and the resulting imaging of the shifted object into the detection plane. Our simulation model is based on the above-mentioned mathematical operations and involves both image defocusing and the determination of the best focus position of every object point from a series of images via the SML operator. However, simulating image defocusing for a 3D object is a complicated problem because, in general, the PSF varies for each point in the image due to both the varying depth and the optical aberrations. In this case the PSF is spatially variant, the convenient approach using Fourier transform operations cannot be applied, and the convolution has to be computed directly. Nevertheless, calculating the PSF for each point is not practical for a large number of pixels. The simplest way to reduce the complexity is to divide the image into sections and consider a constant PSF inside each section (a piecewise invariant PSF). The space-variant PSF can then be expressed as a weighted summation of the invariant PSFs[15-16]. However, rendering the individual sections of the image separately leads to blur discontinuity artifacts in the resulting image[15-17]. To suppress the artifacts, one solution is to interpolate two adjacent PSFs to achieve a smoother transition between the corresponding sections[15-17]. Applying a median filter to the acquired depth map[18] can be another solution.

The aim of the paper is to propose a numerical model for simulation of the shape from focus method. The model uses a weighted summation of the invariant PSFs. It approaches reality by using the polychromatic PSF of the generalized aperture function of the lens, including the focus error, to simulate image defocusing, together with the spectrum of a Standard illuminant, the dispersion function of a real imaging optical system and the spectral sensitivity of a real light-sensitive sensor. The model makes it possible to propose the parameters of a measuring sensor based on the shape from focus method and to increase the effectiveness of experimental work, for example by decreasing time lag and reducing the operating expenses caused by successive selection of unsuitable sensor components. The utilization of the model is presented for three optical systems: an aberration-free optical system, an optical system with chromatic aberration and an achromatic optical system. The model allows one to study the accuracy and reliability of the determination of the object's surface topography by means of the shape from focus method.

• Let us assume the detection of an image P′ of the object point P according to Fig. 1, where d1 represents the distance between the object and the lens and d2i is the distance between the image and the lens. The relation between the distances d1 and d2i and the lens focal length f is given by the Gaussian lens law $\frac{1}{f}=\frac{1}{{{d}_{1}}}+\frac{1}{{{d}_{2i}}}$ [13]. Fig. 1 shows that the object point P is projected onto a point (an Airy disc in terms of wave optics) P′ in the image plane. Let us call the detected image the ideal image I (x′, y′) in the case of geometrical optics and the focused image If (x′, y′) in the case of wave optics. In a plane at the distance d2 from the lens, the point P is imaged as a blurred spot P″ and the defocused image Id (x′, y′) is obtained. For a blurred imaging system the focus error ε (the deviation from the Gaussian lens law) is defined as $\varepsilon =\frac{1}{{{d}_{1}}}+\frac{1}{{{d}_{2}}}-\frac{1}{f}=\frac{1}{{{d}_{1}}}+\frac{1}{{{d}_{2i}}-\delta }-\frac{1}{f}$, where δ=d2i−d2[13].
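As a quick numerical illustration of these relations (a minimal sketch, not code from the paper), the image distance and the focus error follow directly from the lens law:

```python
import numpy as np

def image_distance(d1, f):
    """Image distance d2i from the Gaussian lens law 1/f = 1/d1 + 1/d2i."""
    return 1.0 / (1.0 / f - 1.0 / d1)

def focus_error(d1, d2, f):
    """Focus error epsilon = 1/d1 + 1/d2 - 1/f (zero for focused imaging)."""
    return 1.0 / d1 + 1.0 / d2 - 1.0 / f

# Example with the paper's setup: f = 75 mm, d1 = d2 = 0.15 m
f, d1, d2 = 0.075, 0.15, 0.15
print(image_distance(d1, f))   # 0.15 m -> detection plane coincides with image plane
print(focus_error(d1, d2, f))  # ~0 -> the object point is imaged sharply
```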

      Figure 1.  Detection of focused P′ and defocused P″ image of the object point P

For simplicity, let us collectively denote the focused image If (x′, y′) and the defocused image Id (x′, y′) as If, d (x′, y′). The relation between the ideal image intensity I (x′, y′) and the intensity If, d (x′, y′) is given by the convolution[14]:

$$I_{f,d}(x', y') = \iint I(\tilde{x}, \tilde{y})\, h_i(x' - \tilde{x},\, y' - \tilde{y})\, \mathrm{d}\tilde{x}\, \mathrm{d}\tilde{y}\ , \qquad (1)$$

where hi (x′, y′) represents the intensity PSF of the incoherent-illumination lens system (the intensity impulse response), derived by means of the PSF of the coherent-illumination lens system (the amplitude impulse response) hu (x′, y′) as hi (x′, y′) =|hu (x′, y′) |². The integral (1) depends on the focus error ε, which changes with d1 or d2[13].

      If the condition of the spatially invariant PSF is satisfied, convolution Eq. (1) in the spatial frequency domain (vx′, vy′) is given by

$$I_{F,D}(v_{x'}, v_{y'}) = I(v_{x'}, v_{y'})\, H_i(v_{x'}, v_{y'})\ , \qquad (2)$$

where IF, D (vx′, vy′), I (vx′, vy′) and Hi (vx′, vy′) are the Fourier transforms of If, d (x′, y′), I (x′, y′) and hi (x′, y′), respectively. The component Hi (vx′, vy′) is referred to as the optical transfer function. The resulting image If, d (x′, y′) is obtained by the inverse Fourier transform of IF, D (vx′, vy′).
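Assuming a spatially invariant PSF, Eqs. (1)-(2) can be realized with FFT-based filtering; the following sketch (our illustration, with a toy test PSF) shows the product-in-frequency-domain computation:

```python
import numpy as np

def defocus(ideal_image, psf):
    """Implement Eqs. (1)-(2): I_fd = IFFT( FFT(I) * FFT(h_i) ).

    ideal_image and psf are 2D arrays of the same shape; the PSF is
    normalized so that the total image energy is preserved.
    """
    psf = psf / psf.sum()
    I_F = np.fft.fft2(ideal_image)
    H_i = np.fft.fft2(np.fft.ifftshift(psf))  # OTF of Eq. (2)
    return np.real(np.fft.ifft2(I_F * H_i))   # back to the spatial domain

# Usage: blur a toy "ideal image" with a small Gaussian test PSF
x = np.linspace(-1, 1, 256)
X, Y = np.meshgrid(x, x)
ideal = (X**2 + Y**2 < 0.25).astype(float)        # bright disc
test_psf = np.exp(-(X**2 + Y**2) / (2 * 0.02**2))
blurred = defocus(ideal, test_psf)
```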

Let us assume that the object under investigation is a 3D object of pyramidal shape with N levels and that the distribution of the intensity I (x′, y′) resembles the depth z (x, y) of the object, as shown in Fig. 2. The object is positioned according to the setup in Fig. 3. In this case the condition of a spatially invariant hi (x′, y′) is fulfilled only within the limited regions A, B, …, C (gray areas). The depth z (x, y) changes in discrete increments to, j=to (the height of a single step).

Figure 2.  The depth z (x, y) of the 3D object and intensity distribution I (x′, y′) corresponding to its ideal image. The image is divided into the limited regions A, B, …, C (gray areas), in which the spatially invariant hi, j (x′, y′) is computed for j=1, 2, …, N, where N denotes the number of regions. The depth z (x, y) changes in discrete increments to, j=to (height of a single step). For simplicity, we assume imaging 1:1, therefore the widths wo, j of individual steps are the same as the widths wi, j of the appropriate regions A, B, …, C

      Figure 3.  Scheme for simulation of the shape from focus method. The pyramidal object is placed at the distance d1 from the optical system. The image of the object produced by the optical system is observed at the distance d2 by means of the detector. Light source is situated at the distance s from the object

      For simplicity, we assume imaging 1: 1, therefore widths wo, j of individual steps are the same as the widths wi, j of the appropriate regions A, B, …, C.

For instance, if the number of levels is N=3, the PSF hi (x′, y′) is defined as

$$h_i(x', y') = \begin{cases} h_{i,1}(x', y') & \text{for } (x', y') \in A \\ h_{i,2}(x', y') & \text{for } (x', y') \in B \\ h_{i,3}(x', y') & \text{for } (x', y') \in C \end{cases} \qquad (3)$$

      Convolution Eq. (1) then becomes

$$I_{f,d}(x', y') = a(x', y') * h_{i,1}(x', y') + b(x', y') * h_{i,2}(x', y') + c(x', y') * h_{i,3}(x', y')\ , \qquad (4)$$
$$a(x', y') = I(x', y')\ \text{for } (x', y') \in A,\ \text{and } 0 \text{ otherwise}\ , \qquad (5)$$
$$b(x', y') = I(x', y')\ \text{for } (x', y') \in B,\ \text{and } 0 \text{ otherwise}\ , \qquad (6)$$
$$c(x', y') = I(x', y')\ \text{for } (x', y') \in C,\ \text{and } 0 \text{ otherwise}\ , \qquad (7)$$
where * denotes the two-dimensional convolution of Eq. (1).

      Alternatively, relation Eq. (2) becomes

$$I_{F,D}(v_{x'}, v_{y'}) = a_F(v_{x'}, v_{y'})\, H_{i,1}(v_{x'}, v_{y'}) + b_F(v_{x'}, v_{y'})\, H_{i,2}(v_{x'}, v_{y'}) + c_F(v_{x'}, v_{y'})\, H_{i,3}(v_{x'}, v_{y'})\ , \qquad (8)$$

      where aF (vx′, vy′) , bF (vx′, vy′) , cF (vx′, vy′) are the Fourier transforms of a, b, c.
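A sketch of this region-wise computation for the piecewise invariant PSF (Eqs. (4)-(8) as reconstructed above) could reuse the `defocus` helper from the previous sketch; the boolean region masks are our stand-ins for the areas A, B, …, C:

```python
import numpy as np

def defocus_piecewise(ideal_image, masks, psfs):
    """Eqs. (4)-(8): restrict the ideal image to each region (a, b, c, ...),
    convolve with that region's invariant PSF and sum the contributions.

    masks : list of boolean arrays selecting the regions A, B, ..., C
    psfs  : list of PSF matrices h_{i,j}, one per region
    """
    result = np.zeros_like(ideal_image, dtype=float)
    for mask, psf in zip(masks, psfs):
        region_image = ideal_image * mask      # a, b, c of Eqs. (5)-(7)
        result += defocus(region_image, psf)   # one convolution term of Eq. (4)
    return result
```

Because each region is rendered separately, this construction produces the blur discontinuity artifacts at the region boundaries discussed in section 1.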

The PSF hi (x′, y′) can be derived by means of the Fourier transform of the generalized pupil function p1 (x, y) of the lens[7, 13-14]:

$$h_i(x', y') = \left| \iint p_1(x, y)\, \exp\!\left[-\mathrm{i}\frac{2\pi}{\lambda d_2}(x x' + y y')\right] \mathrm{d}x\, \mathrm{d}y \right|^2 , \qquad p_1(x, y) = p(x, y)\, \exp\!\left[\mathrm{i}\frac{\pi \varepsilon}{\lambda}(x^2 + y^2)\right] , \qquad (9)$$

where p (x, y) is the pupil function and λ is the wavelength of the light. For a circular aperture of radius R the pupil function is p (x, y) =1 for x²+y²≤R² and p (x, y) =0 for x²+y²>R². After substituting p (x, y) into relation Eq. (9) and considering polar coordinates x=r1cosφ, y=r1sinφ and x′=r2cosθ, y′=r2sinθ, the PSF Eq. (9) becomes

$$h_i(r_2) = \left| 2\pi \int_0^R \exp\!\left(\mathrm{i}\frac{\pi \varepsilon}{\lambda} r_1^2\right) J_0\!\left(\frac{2\pi}{\lambda d_2} r_1 r_2\right) r_1\, \mathrm{d}r_1 \right|^2 , \qquad (10)$$
where J0 denotes the zero-order Bessel function of the first kind.
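Under the reconstruction of Eq. (10) above, the radial integral can be evaluated numerically, e.g. with scipy's Bessel function J0 and a midpoint rule (an illustrative sketch, with hypothetical parameter values):

```python
import numpy as np
from scipy.special import j0

def psf_radial(r2, wavelength, d2, R, eps, n_steps=2000):
    """Evaluate Eq. (10): |2*pi * int_0^R exp(i*pi*eps*r1^2/lambda)
    * J0(2*pi*r1*r2/(lambda*d2)) * r1 dr1|^2, by a midpoint rule."""
    r1 = (np.arange(n_steps) + 0.5) * (R / n_steps)
    dr = R / n_steps
    phase = np.exp(1j * np.pi * eps * r1**2 / wavelength)
    bessel = j0(2 * np.pi * r1 * r2 / (wavelength * d2))
    integral = np.sum(phase * bessel * r1) * dr
    return np.abs(2 * np.pi * integral) ** 2

# Defocused PSF profile for the paper's parameters (R = D/2 = 6.35 mm)
r2 = np.linspace(0, 50e-6, 200)
h = [psf_radial(r, 550e-9, 0.15, 6.35e-3, eps=0.05) for r in r2]
```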

In this paper, the derivation of the PSF is based on a single lens with a single aperture. However, an actual imaging optical system may contain many lenses and apertures. In such cases, all these elements may be lumped into a single "black box", and the significant properties can be completely described by specifying only the terminal properties of the aggregate[14].

• In order to simulate a more realistic situation, the imaging performance for the monochromatic case should be extended to the polychromatic case. Polychromatic light is modeled as white light in the wavelength range from 400 to 700 nm[7]. The polychromatic PSF hipoly (x′, y′) is given by[6]

$$h_i^{\mathrm{poly}}(x', y') = \int_{\lambda_1}^{\lambda_2} S(\lambda)\, h_i(x', y'; \lambda)\, \mathrm{d}\lambda\ , \qquad (11)$$

where S (λ) is the spectral weight factor determined by the source-detector-filter combination and (λ1, λ2) is the range of wavelengths within which S (λ) takes significant values[6, 8, 19]. The spectrum of the light source and the spectral sensitivity of the detector are multiplied to get the resulting spectral weight factor S (λ)[19].
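In discrete form (10 nm steps, as used later in the simulation), Eq. (11) becomes a weighted sum. The sketch below reuses the `psf_radial` helper above; `source_spectrum` and `detector_sensitivity` stand for the tabulated D65 and CMOS data of Refs. [20-21], and `eps_of_lambda` anticipates the wavelength-dependent focus error of Eq. (12) below:

```python
import numpy as np

def polychromatic_psf(r2_values, wavelengths, source_spectrum,
                      detector_sensitivity, d2, R, eps_of_lambda):
    """Discrete form of Eq. (11): h_poly = sum_k S(lambda_k) * h_i(lambda_k).

    source_spectrum and detector_sensitivity are arrays tabulated at
    `wavelengths`; their product gives the spectral weight factor S(lambda).
    eps_of_lambda(lam) returns the focus error for each wavelength (Eq. (12)).
    """
    S = np.asarray(source_spectrum) * np.asarray(detector_sensitivity)
    S = S / S.sum()                                 # normalized spectral weights
    h_poly = np.zeros_like(r2_values, dtype=float)
    for lam, w in zip(wavelengths, S):
        h_mono = np.array([psf_radial(r, lam, d2, R, eps_of_lambda(lam))
                           for r in r2_values])     # monochromatic PSF, Eq. (10)
        h_poly += w * h_mono
    return h_poly
```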

In the case of polychromatic light, chromatic aberration appears as a consequence of dispersion. Each monochromatic component of the light contributes to the overall blurring of the image, because each component is focused at a different image distance d2i. The focus error of the individual components is expressed by the relation[13-14]:

$$\varepsilon(\lambda) = \frac{1}{d_1} + \frac{1}{d_2} - \frac{1}{f(\lambda)}\ , \qquad (12)$$

which is substituted into relation Eq. (10).

For the purposes of the paper, the above-mentioned theoretical background is applied to the simulation of the measurement of 3D object topography by means of the shape from focus method. According to the principle of the method, the object under investigation is moved along the optical axis, while a sequence of images If, d (x′, y′) of the same scene of the object, corresponding to various object distances d1, is obtained and subsequently processed. The depth of the object is determined by searching for the position where every point on the object is imaged sharply. To determine the focused image If (x′, y′) at each image point of If, d (x′, y′), the Sum-Modified Laplacian (SML) operator is applied to the image sequence[1].

The Modified Laplacian operator is computed as[1]

$$\nabla^2_{M} I(i, j) = \left| 2I(i, j) - I(i - \mathit{step},\, j) - I(i + \mathit{step},\, j) \right| + \left| 2I(i, j) - I(i,\, j - \mathit{step}) - I(i,\, j + \mathit{step}) \right| , \qquad (13)$$

where step represents a variable spacing between pixels, and the sum of the modified Laplacian function in a small window of size M around a point (i, j) is of the form

$$F(i, j) = \sum_{x = i - M}^{i + M}\ \sum_{y = j - M}^{j + M} \nabla^2_{M} I(x, y)\ . \qquad (14)$$
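A direct implementation of Eqs. (13)-(14) may look as follows (a sketch; Ref. [1] additionally thresholds the modified Laplacian before summation, which is omitted here):

```python
import numpy as np

def modified_laplacian(img, step=1):
    """Eq. (13), computed for all interior pixels at once."""
    ml = np.zeros_like(img, dtype=float)
    ml[step:-step, :] += np.abs(2 * img[step:-step, :]
                                - img[:-2 * step, :] - img[2 * step:, :])
    ml[:, step:-step] += np.abs(2 * img[:, step:-step]
                                - img[:, :-2 * step] - img[:, 2 * step:])
    return ml

def sum_modified_laplacian(img, step=1, M=1):
    """Eq. (14): sum of the modified Laplacian over a (2M+1) x (2M+1) window
    (window sum via circular shifts; borders wrap, adequate for interior pixels)."""
    ml = modified_laplacian(img, step)
    F = np.zeros_like(ml)
    for di in range(-M, M + 1):
        for dj in range(-M, M + 1):
            F += np.roll(np.roll(ml, di, axis=0), dj, axis=1)
    return F

# Depth recovery: for each pixel, pick the object distance d1 that maximizes
# F(i, j) over the acquired image sequence (argmax along the sequence axis).
```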
• We simulate the shape from focus method according to the setup shown in Fig. 3. It consists of a source of polychromatic light, a 3D object under investigation, an optical system and a detector. The simulation model comprises three cases: (a) an aberration-free (ideal) optical system, (b) an optical system with chromatic aberration and (c) an achromatic optical system.

    • For simulation we use the following specific parameters:

The object is represented by a five-level (N=5) pyramidal nontransparent object, whose 1D profile z (x) and corresponding 1D ideal image intensity profile I (x′) are shown in Fig. 4 (a), 4 (b). The height of the whole object is chosen as 735 μm, therefore the height of the individual level (step) in the profile z (x) is to, j=to=735 μm/5=147 μm. The width of the individual step (gray areas in Fig. 2) is wo, j=180 μm for j=1, …, N−1 and wo, j=360 μm for j=N. The width wo, j of the single steps in the depth map z (x, y) is the same as the width wi, j of the steps in the intensity distribution I (x′, y′). The height of the individual level (step) in the intensity distribution I (x′, y′) is ti, j≈1/[s+(N−j+1)·to]²−1/[s+(N−j)·to]², where s represents the distance of the source from the N-th level (top) of the pyramidal object (Fig. 3). During the simulation procedure we set s=10 mm. In the simulation model we additionally assume that the gray level on each object level is not constant but fluctuates; the origin of the fluctuation is the roughness of the object's surface. Thus the intensity distribution I (x′, y′) contains a fluctuation, as depicted in Fig. 4 (d). The fluctuation Ifluct (x′, y′) is added to the basic intensity distribution I (x′, y′) by the summation I (x′, y′) +Ifluct (x′, y′). The values of Ifluct (x′, y′) are created by a random number generator with uniform distribution (mean value of Ifluct is 5 a.u., standard deviation of Ifluct is 3 a.u.).

      Figure 4.  (a) A one-dimensional profile z (x) of the object under test and (d) a resulting intensity distribution I (x′) , computed by summation of (b) the intensity distribution I (x′) with (c) the fluctuation Ifluct (x′)
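The test object and its ideal intensity, including the roughness fluctuation, could be generated as follows (a sketch under the parameters above; the exact geometry of the published profile may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

N, t_o = 5, 147e-6                       # number of levels, step height [m]
widths = [180e-6] * (N - 1) + [360e-6]   # step widths w_o,j (top step is 360 um)
s, dx = 10e-3, 2.4e-6                    # source distance [m], sampling [m]

def pyramid_profile():
    """1D profile z(x): ascending steps, a flat top, then the mirror image."""
    up = np.concatenate([np.full(int(round(w / dx)), (j + 1) * t_o)
                         for j, w in enumerate(widths[:-1])])
    top = np.full(int(round(widths[-1] / dx)), N * t_o)
    return np.concatenate([up, top, up[::-1]])

z = pyramid_profile()
# Ideal intensity: inverse-square falloff with distance from the source,
# consistent with t_i,j ~ 1/[s+(N-j+1)t_o]^2 - 1/[s+(N-j)t_o]^2 above
I = 1.0 / (s + (N * t_o - z)) ** 2
I = 100 * I / I.max()                                  # arbitrary units
# Uniform fluctuation with mean 5 a.u. and standard deviation 3 a.u.
I += rng.uniform(5 - 3 * np.sqrt(3), 5 + 3 * np.sqrt(3), I.shape)
```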

The distribution of the intensity I (x′, y′) is represented by a matrix of size 902×902. The linear size of the matrix area is 2.15 mm, and the distance between neighboring points is δx=2.4 μm. The sampling of the intensity matrix is the same as the sampling of the matrices of the PSFs hi, j computed for the individual object levels. The resulting sampling of the intensity matrix is, however, accommodated to the sampling (pixel pitch) of the detector matrix, because the detector has a different sampling. Thus the intensity matrix has to be transformed into the matrix of the detector.

The detector is represented by a CMOS camera in monochromatic regime (Thorlabs catalogue, item DCC1545M[20]). The linear pixel size of the CMOS camera is δxCMOS=5.2 μm. In order to bring the sampling of the intensity matrix closer to the sampling of the CMOS camera matrix, the resulting dimension of the intensity matrix is decreased to 451×451 by summing 2×2 intensity values into a single intensity value. The resulting sampling is then δx=4.8 μm. Information about the CMOS camera, including the spectral sensitivity needed for the computation of S (λ) in relation Eq. (11), is available from Ref.[20].
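The 2×2 binning that converts the 902×902 intensity matrix (δx=2.4 μm) to the 451×451 detector matrix (δx=4.8 μm) is a reshape-and-sum (a minimal sketch):

```python
import numpy as np

def bin_2x2(img):
    """Sum 2x2 blocks: a (902, 902) intensity matrix becomes (451, 451),
    changing the sampling from 2.4 um to 4.8 um."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

I_detector = bin_2x2(np.ones((902, 902)))   # -> shape (451, 451), all values 4
```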

The light source is represented by the Standard illuminant D65, with the wavelength range from 400 to 700 nm (increment 10 nm). The spectrum of the light source needed for the computation of S (λ) in relation Eq. (11) is available from Ref.[21].

      For simulation of the shape from focus method we use the following specific parameters for three different cases of the optical system:

      (a) aberration-free (ideal) optical system

      Diameter of lens D=12.7 mm

      Focal length f=75 mm

      Image distance d2=0.15 m

      Object distance d1=0.15 m

      (b) optical system with chromatic aberration

      Diameter of lens D=12.7 mm

      Focal length for λfoc=550 nm, f (550 nm) =75 mm

      Image distance d2=0.15 m

      Object distance d1=0.15 m

Material: glass N-BK7. The dispersion formula n (λ) is acquired from Ref.[22]. The radius of a biconvex lens for λfoc=550 nm is computed as[13] R550=2f550 (n550−1) =0.077 778 4 m. The focal length as a function of wavelength, computed as[13] f (λ) =R550/{2[n (λ) −1]}, is then substituted into Eq. (12).
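A sketch of this dispersion computation using the Sellmeier coefficients of SCHOTT N-BK7 from Ref. [22] (wavelength in micrometers; the helper names are ours):

```python
import numpy as np

# Sellmeier coefficients of SCHOTT N-BK7 (lambda in micrometers)
B = (1.03961212, 0.231792344, 1.01046945)
C = (0.00600069867, 0.0200179144, 103.560653)

def n_bk7(lam_um):
    """Refractive index n(lambda) of N-BK7 from the Sellmeier equation."""
    L2 = lam_um ** 2
    return np.sqrt(1 + sum(b * L2 / (L2 - c) for b, c in zip(B, C)))

f_550 = 0.075                             # design focal length at 550 nm [m]
R_550 = 2 * f_550 * (n_bk7(0.550) - 1)    # biconvex radius, ~0.0777784 m

def focal_length(lam_um):
    """f(lambda) = R_550 / (2 * [n(lambda) - 1]), substituted into Eq. (12)."""
    return R_550 / (2 * (n_bk7(lam_um) - 1))

def eps(lam_um, d1=0.15, d2=0.15):
    """Chromatic focus error of Eq. (12); zero only at the design wavelength."""
    return 1 / d1 + 1 / d2 - 1 / focal_length(lam_um)
```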

      (c) achromatic optical system

      Diameter of lens (doublet) D=12.7 mm

      Focal length for λfoc=550 nm, f (550 nm) =75 mm

      Image distance d2=0.15 m

      Object distance d1=0.15 m

Material: mounted achromatic doublet N-BK7/SF2. Focal length increments Δf as a function of wavelength (acquired from the Thorlabs catalogue, item AC127-075-A-ML[23]) are added to the focal length f (550 nm) =75 mm and substituted into Eq. (12).

The object movement along the optical axis is simulated by changing the object distance d1. The object distance of the j-th level is d1=do+δj, A, where do is the object distance for ε=0 (the level is imaged sharply) and δj, A= (A−j) ·to, j (j=1, 2, 3, 4, 5), where A represents the sequence number j of the sharply imaged level. The change of A (A=1, 2, 3, 4, 5) corresponds to the simulated object movement. If the level is in focus, then j=A, δj, A=0 and the object distance is d1=do=0.15 m.
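The bookkeeping of the simulated scan can be expressed as a small loop (an illustrative sketch with hypothetical names):

```python
t_o = 147e-6     # step height [m]
d_o = 0.15       # object distance of a sharply imaged level [m]
N = 5

# For each simulated object position A, the object distance of level j:
for A in range(1, N + 1):        # A = index of the sharply imaged level
    d1 = {j: d_o + (A - j) * t_o for j in range(1, N + 1)}
    # Render the image for this position: each level j is defocused with
    # the focus error eps = 1/d1[j] + 1/d2 - 1/f, which is zero only for j == A.
```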

The obtained sequences of images for various d1 are processed by the Sum-Modified Laplacian (SML) operator according to relations Eq. (13) and Eq. (14). The parameter step is set to step=1 according to Ref.[1]. In the method[1], in contrast to auto-focusing methods, a small window of size (3×3) is typically used. Therefore, we choose the same window size, i.e., M=1.

• Figs. 5-7 show the results of the simulation of the determination of the depth maps z (x, y) of the object by means of the shape from focus method for three cases: the aberration-free (ideal) optical system (Fig. 5), the optical system with chromatic aberration (Fig. 6) and the achromatic optical system (Fig. 7). Moreover, for illustration, Fig. 8 shows the sum of the modified Laplacian function F (i, j) computed by Eq. (14) for i=225, j=125 (a point on the third level of the pyramidal object) as a function of d1 for all three cases of the optical system.

      Figure 5.  (a) An ideal case of the depth map z (x, y) (matrix of 451×451 pixels) of the object represented by the pyramid with 5 levels with total height 735 μm, and height of the individual level of the pyramid is 147 μm (b) a cross section of the depth map z (x, y) from (a) at a position y=225, (c) the depth map z (x, y) of the object acquired via simulation of the shape from focus method using the aberration-free optical system (d) a cross section of the depth map z (x, y) from (c) at a position y=225

      Figure 6.  (a) A depth map z (x, y) of the object under test acquired via simulation of the shape from focus method using the optical system with chromatic aberration (b) a cross section of the depth map z (x, y) at a position y=225

      Figure 7.  (a) A depth map z (x, y) of the object under test acquired via simulation of the shape from focus method using the achromatic optical system (b) a cross section of the depth map z (x, y) at a position y=225

      Figure 8.  Sum of modified Laplacian function F (i, j) computed by Eq. (14) for i=225, j=125 (the point on the third level of the pyramidal object) as a function of d1 for aberration-free (ideal) optical system, optical system with chromatic aberration and achromatic optical system. The total object height is 735 μm

As shown in Fig. 5, for the aberration-free optical system the acquired depth map (Fig. 5 (c), 5 (d)) is almost the same as the ideal depth map (Fig. 5 (a), 5 (b)), except for the boundary artifacts between adjacent object levels. As mentioned in section 1, the artifacts appear in the resulting image due to the separate rendering of individual sections of the image[15-17]. The good agreement between the original and the acquired topography of the object is explained by the fact that the ideal system does not include inherent lens aberrations. These aberrations negatively influence the evaluation of the resulting depth maps, as depicted in Fig. 6. In the case of the optical system with chromatic aberration, a distorted depth map is obtained. One can conclude that, due to the longitudinal chromatic aberration, the single lens system is not appropriate for the shape from focus method.

In the case of the achromatic optical system, the distortion of the resulting depth map is inhibited and the pyramidal shape of the object is maintained, as shown in Fig. 7. However, in contrast with the ideal aberration-free optical system, worse agreement between the original and the acquired depth map z (x, y) of the object is achieved. The height of the whole object is 588 μm according to the results, instead of 735 μm. Further, except for the first level, the positions of the individual levels in the original and the acquired depth maps do not match. This can be caused by the limited resolution of the shape from focus method, which is defined by the depth of focus Dof of the lens[24]. The depth of focus Dof can be derived as Dof=1.22λ/ (NA) ²=1.22λ/ (sinθ) ², where NA is the numerical aperture of the optical system and θ represents the half-angle subtended by the exit pupil when viewed from the image plane[14, 24]. In this case (for λ=550 nm) Dof=1.22λ/ (sinθ) ²≈1.22λ (2d2/D) ²=374 μm. In our case, the step between two successive images is the same as the height of the individual level, to, j=147 μm. Therefore, in order to approach the resolution limit of the method, let us double the step height to to=294 μm. The object's height is then to·N=1 470 μm. The result of the simulation for this case is shown in Fig. 9. Fig. 10 shows the corresponding sum of the modified Laplacian function F (i, j) computed by Eq. (14) for i=225, j=125 (the point on the third level of the pyramidal object) as a function of d1.
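The depth-of-focus estimate quoted above can be checked numerically (assuming the small-angle substitution sinθ≈D/(2d2)):

```python
lam, d2, D = 550e-9, 0.15, 12.7e-3

sin_theta = D / (2 * d2)             # half-angle of the exit pupil (small-angle)
D_of = 1.22 * lam / sin_theta ** 2   # = 1.22 * lam * (2*d2/D)**2
print(f"Depth of focus: {D_of * 1e6:.0f} um")   # ~374 um > 147 um step height
```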

      Figure 9.  (a) A one-dimensional profile z (x) of the object under test, (b) a depth map z (x, y) of the object under test acquired via simulation of the shape from focus method using the achromatic optical system (c) a cross section of the depth map z (x, y) at a position y=225. The total object′s height is 1470 μm

      Figure 10.  Sum of modified Laplacian function F (i, j) computed by Eq. (14) for i=225, j=125 (the point on the third level of the pyramidal object) as a function of d1 for achromatic optical system. The total object height is 1470 μm

In comparison with Fig. 7, better results are achieved. The acquired total height of the object, as well as the heights of the individual levels of the object under investigation, corresponds to the object height profile in Fig. 9 (a). However, due to the bigger blur discontinuity between two adjacent levels, the more significant artifacts mentioned in section 1 appear. To suppress the artifacts, a median filter can be applied to the acquired depth map[18].

• The presented results show that a real sensor based on the shape from focus method requires an achromatic optical system. The results also show that the step of the object displacement and the depth of focus of the optical system influence the reliability of the method. The model approaches reality by using imaging of the 3D object in polychromatic light, the Standard illuminant D65 as the source of light, a real CMOS camera and the dispersion functions of optical systems with chromatic and minimized chromatic aberrations. The presented model can be used to study the effect of the experimental parameters on the accuracy and reliability of the determination of the object's depth map. One can conclude that the model makes it possible to increase the effectiveness of experimental work, to decrease time lag and to reduce the operating expenses caused by the selection of unsuitable sensor components.

References (24)

[1] NAYAR S K, NAKAGAWA Y. Shape from focus[J]. IEEE Trans. Pattern Anal. Mach. Intell., 1994, 16(8): 824-831.
[2] PERTUZ S, PUIG D, GARCIA M A. Reliability measure for shape from focus[J]. Image Vis. Comput., 2013, 31(10): 725-734.
[3] MAHMOOD M T, SHIM S, CHOI T S. Depth and image focus enhancement for digital cameras[C]. IEEE 15th International Symposium on Consumer Electronics, Singapore, 2011: 50-53.
[4] PERTUZ S, PUIG D, GARCIA M A. Analysis of focus measure operators for shape from focus[J]. Pattern Recognit., 2013, 46(5): 1415-1432.
[5] SUBBARAO M. Direct recovery of depth-map I: differential methods[C]. IEEE Computer Society Workshop on Computer Vision, Miami Beach, Florida, USA, 1987: 58-65.
[6] RAVIKUMAR S, THIBOS L N, BRADLEY A. Calculation of retinal image quality for polychromatic light[J]. J. Opt. Soc. Am. A, 2008, 25(10): 2395-2407.
[7] CLAXTON C D, STAUNTON R C. Measurement of the point-spread function of a noisy imaging system[J]. J. Opt. Soc. Am. A, 2008, 25(1): 159-170.
[8] TAKEDA M. Chromatic aberration matching of the polychromatic optical transfer function[J]. Appl. Opt., 1981, 20(4): 684-687.
[9] MANDAL S. A novel technique for evaluating the polychromatic optical transfer function of defocused optical imaging systems[J]. Optik, 2013, 124(17): 2627-2629.
[10] BARNDEN R. Calculation of axial polychromatic optical transfer function[J]. Opt. Acta, 1974, 21(12): 981-1003.
[11] SUBBARAO M, LU M-C. Computer modeling and simulation of camera defocus[C]. Conference on Optics, Illumination, and Image Sensing for Machine Vision VII, Boston, Massachusetts, USA, 1992, Proc. SPIE, 1993, 1822: 110-120.
[12] MOELLER M, BENNING M, SCHÖNLIEB C, CREMERS D. Variational depth from focus reconstruction[J]. IEEE Trans. Image Process., 2015, 24(12): 5369-5378.
[13] SALEH B E A, TEICH M C. Fundamentals of Photonics[M]. New York: John Wiley & Sons, 1991.
[14] GOODMAN J W. Introduction to Fourier Optics[M]. New York: McGraw-Hill Book Co., 1968.
[15] ATIF M. Optimal depth estimation and extended depth of field from single images by computational imaging using chromatic aberrations[D]. Heidelberg: Ruperto Carola Heidelberg University, 2013.
[16] HADJ S B, BLANC-FÉRAUD L. Modeling and removing depth variant blur in 3D fluorescence microscopy[C]. IEEE International Conference on Acoustics, Speech and Signal Processing, Kyoto, Japan, 2012: 689-692.
[17] BARSKY B A, TOBIAS M J, CHU D-P, et al. Elimination of artifacts due to occlusion and discretization problems in image space blurring techniques[J]. Graph. Models, 2005, 67(6): 584-599.
[18] ZHANG L, NAYAR S. Projection defocus analysis for scene capture and image display[J]. ACM Trans. Graph., 2006, 25(3): 907-915.
[19] FURLAN W D, SAAVEDRA G, SILVESTRE E, et al. Polychromatic axial behavior of aberrated optical systems: Wigner distribution function approach[J]. Appl. Opt., 1997, 36(35): 9146-9151.
[20] CMOS Camera DCC1545M[EB/OL]. [2016-01-07]. http://www.thorlabs.de/newgrouppage9.cfm?objectgroup_id=4024/.
[21] Relative spectral power distribution of CIE Standard Illuminant D65[EB/OL]. [2016-01-07]. http://files.cie.co.at/204.xls/.
[22] Dispersion formula of glass N-BK7[EB/OL]. [2016-01-07]. http://refractiveindex.info/?shelf=glass&book=BK7&page=SCHOTT.
[23] Mounted Achromatic Doublet AC127-075-A-ML[EB/OL]. [2016-01-07]. https://www.thorlabs.de/newgrouppage9.cfm?objectgroup_id=2696.
[24] MADOU M J. Manufacturing Techniques for Microfabrication and Nanotechnology[M]. Boca Raton, Florida: CRC Press-Taylor & Francis Group, 2011.
