

Lipid segmentation method based on magnification endoscopy with narrow-band imaging

WU Zhi-sheng ZOU Hong-bo ZHU Wen-wu QI Wei-ming WANG Li-qiang YUAN Bo YANG Qing XU Xiao-rong YAN Hui-hui

WU Zhi-sheng, ZOU Hong-bo, ZHU Wen-wu, QI Wei-ming, WANG Li-qiang, YUAN Bo, YANG Qing, XU Xiao-rong, YAN Hui-hui. Lipid segmentation method based on magnification endoscopy with narrow-band imaging[J]. Chinese Optics, 2024, 17(4): 982-994. doi: 10.37188/CO.EN-2023-0024


doi: 10.37188/CO.EN-2023-0024
Funds: Supported by the National Natural Science Foundation of China (No. T2293751); the National Key Research and Development Program of China (No. 2021YFC2400103)
More Information
    Author Bio:

    Wu Zhi-sheng (1998—), male, born in Yuanping, Shanxi Province. Master’s degree. He obtained his master’s degree from Zhejiang University in 2023. His research interests are endoscopic imaging technology and medical image processing. E-mail: 22030077@zju.edu.cn

    Qi Wei-ming (1966—), male, born in Tiantai, Zhejiang Province, bachelor’s degree, professional Senior Engineer. He obtained his bachelor’s degree from Zhejiang University in 1990. Currently working at the Zhejiang Center for Medical Device Evaluation, he mainly engages in research of medical device testing technology and safety evaluation. E-mail: qiweiming@zjmde.org.cn

    Wang Li-qiang (1977—), male, born in Weinan, Shaanxi Province. Associate Professor, Doctoral Supervisor, College of Optical Science and Engineering, Zhejiang University. He received his Ph.D. degree from Zhejiang University in 2004. His research interests are optoelectronic imaging technology and endoscopy. E-mail: wangliqiang@zju.edu.cn

    Corresponding authors: qiweiming@zjmde.org.cn; wangliqiang@zju.edu.cn


  • Chinese Library Classification (CLC): TP391.41

  • Abstract:

    A white opaque substance (WOS), whose main component is lipid, can cover the microstructures relevant to cancer diagnosis, yet the morphological features of WOS are closely related to tumor grading. To provide doctors with more usable lipid-related information, this paper studies segmentation methods for lipid images. First, a lipid image enhancement algorithm based on the Retinex framework is introduced, together with a specular reflection removal algorithm. Then, a lipid segmentation method based on an active contour model is presented: it extracts local information from the corrected hue values and global information from the intensity values, adaptively obtains the weight factors, and segments the lipid region starting from an initial contour. Finally, phantom experiments based on a self-developed endocytoscopic imaging system were designed to verify the effectiveness of the method. The experimental results show that the pixel accuracy, sensitivity, and Dice coefficient of the segmentation method are all above 90%. The method overcomes the effects of uneven illumination and specular reflection, reflects the shape of the lipids well, and provides doctors with usable information.

  • Cancer is a long-term threat to human health, and the mortality rate of digestive tract cancers has always been at the forefront of all kinds of cancers[1]. However, the current detection rate of early-stage digestive tract cancers remains very low, mainly because early-stage cancer only shows subtle mucosal changes and is difficult to detect in time. Therefore, improving the ability to diagnose digestive tract cancers early is essential.

    Endoscopic diagnosis of early-stage digestive tract cancer mainly includes two steps: the detection of cancers and the differentiation of cancerous and noncancerous lesions[2]. At present, a variety of mature image enhancement methods have been used for screening and detection of early-stage cancer[3]. Among them, magnification endoscopy with narrow-band imaging (ME-NBI) is a very effective method[4-5] to help doctors obtain key microstructure information for early cancer diagnosis. At present, ME-NBI is widely used to detect digestive tract cancer[6-7].

    On the basis of ME-NBI, the vessel plus surface (VS) classification system using magnifying endoscopy to differentiate cancerous and noncancerous lesions was first proposed by Yao et al.[6]. Following this, a standardized set of diagnostic systems called MESDA-G was proposed in 2016[8].

    In recent years, a white opaque substance (WOS) was found on the surface of some tumors. This WOS is formed by the accumulation of tiny lipid droplets in and under the epithelium[9]. It strongly reflects and scatters visible light, masking the blood vessels on the surface of the gastric mucosa. This phenomenon was first reported by Yao in 2008[10], following which WOS was also confirmed in other areas of the digestive tract, including the stomach, large intestine, colon, rectum, and duodenum[9, 11-15].

    When the microvascular (MV) and microsurface (MS) patterns are covered, differences in the morphological features of WOS serve as useful indicators in the differential diagnosis of carcinoma and adenoma[2, 16]. Studies have shown that the WOS of carcinoma shows a disorganized and asymmetrical distribution, forming an irregular speckled pattern. In contrast, the WOS of adenoma shows a well-organized and symmetrical distribution, forming a regular reticular/maze-like pattern[10]. In addition, the coverage area of WOS can also be a useful indicator when judging the grade of lesions[17]. However, it is difficult to master the diagnostic skills of ME-NBI in a short time, so nonprofessional endoscopists are more likely to misdiagnose when facing a large number of ME-NBI images. Therefore, if the morphology and lipid coverage of WOS images can be accurately obtained, the diagnostic accuracy of non-professional endoscopists will be improved, thus preventing unnecessary errors and improving patients' health.

    Computer-aided diagnosis is premised on image segmentation technology; however, compared with conventional images, ME-NBI images have a smaller color space, less rich geometric textures, and worse lighting conditions, making them particularly difficult to segment. Researchers often rely on doctors to perform manual segmentation[18], but this practice is time-consuming and laborious. In recent years, some algorithms have been proposed for lesion segmentation in ME-NBI images[18-20], but there is no segmentation method for WOS images. The research presented in this paper was therefore intended to fill this gap.

    Active contour models (ACM) have been widely used in various fields since the snakes method[21] was proposed. The basic idea of ACM is to represent the active contour as the zero-level set of the high-dimensional implicit function, called the level set function, and to evolve the level set function according to the partial differential equation. Nevertheless, the classic region-based models are mainly based on gray images, which are aimed at blood vessels and optical coherence tomography (OCT) images. As such, they are not suitable for processing WOS images under ME-NBI.

    We analyse the tissue composition of WOS and the characteristics of ME-NBI images and then propose an active contour model based on the local modified hue value and global intensity value to segment lipid regions, providing more information to doctors. The contributions made in the present paper are as follows.

    (1) Enhancing the details and correcting the uneven illumination under the Retinex framework. Correcting the specular reflections based on the detail layer.

    (2) Proposing the active contour model, which combines the modified hue and intensity values. Adaptively obtaining the weight factors of local and global information and iterating from the initial contour obtained by the adaptive threshold method[22].

    (3) Making tissue phantom with optical properties similar to those of porcine gastric tissue and designing experiments to obtain ME-NBI images of lipids.

    (4) Applying the proposed method to ME-NBI images to verify its effectiveness by comparing segmentation results with the manual annotations based on three comparison metrics.

    This section mainly introduces the lipid image analysis method. First, the pre-processing method is introduced. Second, the segmentation method based on the local modified hue value and global intensity value is proposed, including the description of the local and global intensity fitting (LGIF) model, modified hue extraction, the adaptive weighting function, the gradient descent flow equation, and the implementation of the proposed method. Finally, the comparison metrics are introduced to quantitatively describe the accuracy of segmentation methods. Fig. 1 summarizes all steps of the lipid image analysis method.

    Figure  1.  Overview of the systematic analysis method for lipid images

    The arrangements are organized as follows: Section 2.1 introduces the pre-processing method, including enhancing and correcting specular reflections. Sections 2.2 to 2.5 describe the proposed method for WOS segmentation. Finally, Section 2.6 introduces comparison metrics used to quantitatively describe the accuracy of segmentation models.

    Endoscopic images often suffer from poorly defined edges and inhomogeneous illumination, which can affect segmentation accuracy. In this paper, we apply the Retinex framework to enhance the images. First, the base layer b(x) is calculated by convolving the original image I(x) with a local Gaussian filter, and the detail layer d(x) is calculated by subtracting b(x) from I(x). Then, homomorphic filtering is performed on the luminance channel of b(x) to correct the inhomogeneous brightness, yielding the corrected base layer b′(x). Ultimately, the final enhanced image g(x) is obtained by amplifying d(x) by some factor k (> 1) and adding it back to b′(x):

    g(x)=b′(x)+kd(x). (1)
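As a concrete illustration, the base/detail decomposition and Eq. (1) can be sketched in NumPy (the paper's implementation environment is MATLAB; the kernel size, σ, and gain k below are illustrative choices, and the homomorphic correction of the base layer is omitted here):

```python
import numpy as np

def gaussian_kernel(size=15, sigma=5.0):
    """Normalised 1-D Gaussian kernel for separable filtering."""
    ax = np.arange(size) - size // 2
    k = np.exp(-ax**2 / (2 * sigma**2))
    return k / k.sum()

def retinex_enhance(img, k=2.0, size=15, sigma=5.0):
    """Base layer b = Gaussian-blurred image, detail layer d = I - b,
    enhanced image g = b + k*d as in Eq. (1)."""
    kern = gaussian_kernel(size, sigma)
    pad = size // 2
    # edge-replicated padding keeps the 'valid' convolution the same size as img
    padded = np.pad(img, pad, mode='edge')
    b = np.apply_along_axis(lambda r: np.convolve(r, kern, mode='valid'), 1, padded)
    b = np.apply_along_axis(lambda c: np.convolve(c, kern, mode='valid'), 0, b)
    d = img - b
    return b + k * d, b, d
```

On a constant image the detail layer is zero and the enhancement leaves the image unchanged, which is a quick sanity check of the decomposition.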

    When doctors use an endoscope as a diagnostic tool, the specular reflection areas will affect observation. In addition, the high brightness brought by specular reflection can affect the area’s average grayscale or gradient value, thereby affecting the accuracy of segmentation based on global or local values.

    For lipid images, the reflective areas are highly pronounced in the detail layer; thus, in the histogram of the detail layer, these pixels appear as values significantly higher than the main distribution. As a result, this part can be used directly as the reflection detection result Rs. By inpainting these areas, most specular reflections can be removed. The specific steps of pre-processing are shown in Fig. 2.
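A minimal sketch of this detection-and-inpainting step, assuming a simple mean-plus-n-standard-deviations rule for "significantly higher than the main distribution" and a naive diffusion inpainting (the paper does not specify which inpainting algorithm it uses, so both choices here are illustrative):

```python
import numpy as np

def detect_specular(detail, n_sigma=3.0):
    """Flag pixels whose detail-layer value lies far above the main
    distribution (illustrative threshold: mean + n_sigma * std)."""
    thr = detail.mean() + n_sigma * detail.std()
    return detail > thr

def inpaint_mask(img, mask, iters=50):
    """Naive diffusion inpainting: repeatedly replace masked pixels
    by the mean of their four neighbours."""
    out = img.astype(float).copy()
    for _ in range(iters):
        nb = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
              np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = nb[mask]
    return out
```

A single bright outlier in an otherwise flat detail layer is detected and smoothed away after one diffusion pass.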

    Figure  2.  The procedure of pre-processing

    The LGIF model was proposed by Wang et al.[23] to achieve more accurate segmentation by combining the advantages of the Chan-Vese (C-V) model[24] and the local binary fitting (LBF) model[25].

    The C-V model is one of the most well-known region-based models. It performs segmentation based on the assumption that the original image is piecewise constant and is insensitive to initial contours. However, the C-V model cannot handle image inhomogeneity, which is very common in medical images.

    The LBF model is used to solve this problem by introducing two spatially varying fitting functions to replace the constants in the C-V model. However, this localized property may introduce many local minima, resulting in an incorrect segmentation result. The accuracy of the result is more dependent on the initial contour, which will greatly affect the robustness of the segmentation results.

    These two methods are combined by the LGIF method, which has higher robustness. The energy function of this method is defined as follows:

    {\boldsymbol{E}}^{{\mathrm{LGIF}}}=\left(1-\omega \right){\boldsymbol{E}}^{{\mathrm{LBF}}}+\omega {\boldsymbol{E}}^{{\mathrm{CV}}} \quad, (2)

    where ω is a constant weight (0 ≤ ω ≤ 1). When the images are corrupted by intensity inhomogeneity, a small value of ω should be chosen[26].

    The LGIF mentioned above can get comparatively accurate results when processing medical images such as computed tomography (CT) images.

    However, due to the complex texture of lipid images under NBI and the strong reflection of lipids on the visible light band, it is difficult to distinguish between lipid regions and saturated illumination regions by comparing intensity values. It is also difficult to segment lipid regions in low illumination regions, which will result in significant segmentation errors (as shown in Fig. 3, color online). Therefore, we propose a method that combines the modified hue value and intensity value to adaptively obtain the weight factors of local and global information to segment lipid regions accurately.

    Figure  3.  Lipid region segmentation results based on the LGIF model. (a) Intensity value; (b) segmentation contours (red rectangles mark incorrect segmentation areas); (c) segmentation results

    If the original image is converted from RGB color space to HSI color space, the background and the lipid region can be distinguished: the mucosa surface appears red or orange under ME-NBI due to the presence of hemoglobin, so its hue value is low, while the lipid-covered region appears blue-green, with a hue value quite different from the background. Therefore, the hue value can be imported into the active contour model. However, there is some noise in the low-illumination parts, which can cause segmentation errors. Therefore, we introduce the following equation to correct the hue value:

    {I}_{m}\left(x\right)=\frac{\left({I}_{{\mathrm{h}}}\left(x\right)\cdot {I}_{{\mathrm{I}}}\left(x\right)\right)}{\mathrm{max}\left({I}_{{\mathrm{h}}}\right)} \quad, (3)

    where {I}_{{\mathrm{h}}}\left(x\right) represents the pixel’s hue value and {I}_{{\mathrm{I}}}\left(x\right) represents the pixel’s intensity value. The modified value can eliminate the noise, and the result is shown in Fig. 4.
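Eq. (3) can be sketched as follows, assuming the standard geometric RGB→HSI conversion for the hue channel (the small epsilon terms, added here to guard against division by zero, are illustrative):

```python
import numpy as np

def rgb_to_hsi_hue_intensity(rgb):
    """Hue (radians) and intensity of the HSI model from an RGB image in [0,1],
    using the standard geometric conversion formulas."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    hue = np.where(b <= g, theta, 2 * np.pi - theta)
    intensity = (r + g + b) / 3.0
    return hue, intensity

def modified_hue(rgb):
    """Eq. (3): weight the hue by the intensity to suppress noise in dark
    regions, normalised by the maximum hue of the image."""
    h, i = rgb_to_hsi_hue_intensity(rgb)
    return (h * i) / (h.max() + 1e-12)
```

A bright blue-green pixel keeps a large modified hue, while a dark noisy pixel is suppressed by its low intensity, which is exactly the correction Fig. 4 illustrates.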

    Figure  4.  (a) The pixel’s hue value; (b) the pixel’s intensity value; (c) the modified image

    Eventually, the energy functional about the modified hue value is obtained under the LBF model. Using the Heaviside function H( \phi )[24], the local intensity fitting energy is defined as follows:

    \begin{split} &{E}^{L}\left(\phi ,{I}_{m1},{I}_{m2}\right)=\\ &{\lambda }_{1}\int \left[\int {K}_{\sigma }\left(x-y\right){\left|{I}_{m}\left(y\right)-{I}_{m1}\left(x\right)\right|}^{2}H\left(\phi \left(\boldsymbol{y}\right)\right){\mathrm{d}}\boldsymbol{y}\right]{\mathrm{d}}x+\\ &{\lambda }_{2}\int \left[\int {K}_{\sigma }\left(x-y\right){\left|{I}_{m}\left(y\right)-{I}_{m2}\left(x\right)\right|}^{2}\right.\\ &\left.\left(1-H\left(\phi \left(\boldsymbol{y}\right)\right)\right){\mathrm{d}}\boldsymbol{y}\right]{\mathrm{d}}x\quad,\\[-3pt]\end{split} (4)

    where \phi is the level set function, {K}_{\sigma } is a Gaussian kernel with standard deviation \sigma , {I}_{m1},{I}_{m2} are the weighted average of the modified hue value in the Gaussian window inside and outside the contour respectively, and the parameters {\lambda }_{1} , {\lambda }_{2} are nonnegative constants.

    Since the g-channel contrast of the lipid image is the highest, the value of the green channel is imported into the C-V model as the global correction term. The global intensity fitting energy is defined as follows:

    \begin{split} {E}^{G}(\phi ,{c}_{1},{c}_{2})=&{\lambda }_{1}\int {\left|{I}_{g}\left(x\right)-{c}_{1}\right|}^{2}H\left(\phi \left(\boldsymbol{x}\right)\right){\mathrm{d}}x+\\ &{\lambda }_{2}\int {\left|{I}_{g}\left(x\right)-{c}_{2}\right|}^{2}\left(1-H\left(\phi \left(\boldsymbol{x}\right)\right)\right){\mathrm{d}}x,\end{split} (5)

    where the constants {c}_{1},{c}_{2} approximate the image intensity outside and inside the contour, respectively, and {I}_{g} is the value of the green channel.

    Combining the two above energies, the entire energy function can be defined as:

    \begin{split} &{E}^{{\mathrm{LGIF}}}\left(\phi ,{I}_{m1},{I}_{m2},{c}_{1},{c}_{2}\right)=\left(1-w\right){\boldsymbol{E}}^{L}\left(\phi ,{I}_{m1},{I}_{m2}\right)+\\ &w{\boldsymbol{E}}^{G}\left(\phi ,{c}_{1},{c}_{2}\right)+\mu P\left(\phi \right)+\nu L\left(\phi \right) \quad,\\[-3pt] \end{split} (6)

    where L\left(\phi \right)= {\displaystyle\int }_{\Omega }\left|\nabla H\left(\phi \left(x\right)\right)\right|{\mathrm{d}}x is the length regularization term, and the extra internal energy {P}\left(\phi \right)={\displaystyle\int }_{\Omega }\dfrac{1}{2}{\left(\left|\nabla \phi \left(x\right)\right|-1\right)}^{2}{\mathrm{d}}x is added to derive a smooth contour during evolution and avoid re-initialization of the level set function[27], and where \mu > 0, \nu > 0 , and w (0 ≤ w ≤ 1) are constant weights of each term.

    In practice, the Heaviside function H\left(\phi \right) can be approximated by a smooth function {H}_{\varepsilon }\left(\phi \right) [23]. By the calculus of variations, the functions {I}_{m1}\left(x\right),{I}_{m2}\left(x\right) and the constants {I}_{g1},{I}_{g2} (the global fitting constants {c}_{1},{c}_{2} of Eq. (5)) can be computed as follows:

    {I}_{g1}=\frac{\displaystyle\int {I}_{g}\left(x\right){H}_{\mathrm{\varepsilon }}\left(\mathrm{\phi }\left(x\right)\right){\mathrm{d}}x}{\displaystyle\int {H}_{\mathrm{\varepsilon }}\left(\mathrm{\phi }\left(x\right)\right){\mathrm{d}}x}\quad, (7)
    {I}_{g2}=\frac{\displaystyle\int {I}_{g}\left(x\right)\left({1-H}_{\mathrm{\varepsilon }}\left(\mathrm{\phi }\left(x\right)\right)\right){\mathrm{d}}x}{\displaystyle\int \left({1-H}_{\mathrm{\varepsilon }}\left(\mathrm{\phi }\left(x\right)\right)\right){\mathrm{d}}x}\quad, (8)
    {I}_{m1}\left(x\right)=\frac{{K}_{\sigma }\left(x\right)\mathrm{*}\left[{H}_{\mathrm{\varepsilon }}\left(\mathrm{\phi }\left(x\right)\right){I}_{m}\left(x\right)\right]}{{K}_{\sigma }\left(x\right)\mathrm{*}{H}_{\mathrm{\varepsilon }}\left(\mathrm{\phi }\left(x\right)\right)}\quad, (9)
    {I}_{m2}\left(x\right)=\frac{{K}_{\sigma }\left(x\right)\mathrm{*}\left[\left(1-{H}_{\mathrm{\varepsilon }}\left(\mathrm{\phi }\left(x\right)\right)\right){I}_{m}\left(x\right)\right]}{{K}_{\sigma }\left(x\right)\mathrm{*}\left[1-{H}_{\mathrm{\varepsilon }}\left(\mathrm{\phi }\left(x\right)\right)\right]} \quad. (10)
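The global constants of Eqs. (7)-(8) reduce to Heaviside-weighted means of the green channel inside and outside the contour. A sketch, assuming the usual arctangent smoothing of the Heaviside function (the local fitting functions of Eqs. (9)-(10) follow the same pattern with a Gaussian convolution in place of the global sums):

```python
import numpy as np

def heaviside_eps(phi, eps=1.0):
    """Smooth approximation of the Heaviside function,
    H_eps(z) = 1/2 * (1 + (2/pi) * arctan(z/eps))."""
    return 0.5 * (1 + (2 / np.pi) * np.arctan(phi / eps))

def global_fit_constants(I_g, phi, eps=1.0):
    """Eqs. (7)-(8): mean green-channel intensity weighted by the
    smoothed Heaviside of the level set, inside and outside the contour."""
    H = heaviside_eps(phi, eps)
    c_in = (I_g * H).sum() / (H.sum() + 1e-12)
    c_out = (I_g * (1 - H)).sum() / ((1 - H).sum() + 1e-12)
    return c_in, c_out
```

For a binary image with the level set aligned to the bright region, the two constants approach 1 and 0, matching the piecewise-constant assumption of the C-V term.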

    Then we can achieve minimization of the functional E\left(\phi ,{I}_{m1},{I}_{m2},{I}_{g1},{I}_{g2}\right) in Eq. (6) with respect to \phi by deducing the gradient descent flow equation

    \begin{split} \frac{\partial \phi }{\partial t}=&{\mathrm{\delta }}_{\mathrm{\varepsilon }}\left(\phi \right)\left({F}_{1}+{F}_{2}\right)+\nu {\mathrm{\delta }}_{\mathrm{\varepsilon }}\left(\phi \right)\text{div}\left(\frac{\nabla \phi }{\left|\nabla \phi \right|}\right)+\\ &\mu \left[{\nabla }^{2}\phi -\text{div}\left(\frac{\nabla \phi }{\left|\nabla \phi \right|}\right)\right]\quad,\end{split} (11)
    \begin{split} {F}_{1}=&\left(1-w\left(x\right)\right)[{-\lambda }_{1}{\int }_{\Omega }{K}_{\sigma }\left(y-x\right){\left|{I}_{m}\left(x\right)-{I}_{m1}\left(y\right)\right|}^{2}{\mathrm{d}}\boldsymbol{y}+\\ &{\lambda }_{2}{\int }_{\Omega }{K}_{\sigma }\left(y-x\right){\left|{I}_{m}\left(x\right)-{I}_{m2}\left(y\right)\right|}^{2}{\mathrm{d}}\boldsymbol{y}] \quad,\\[-5pt] \end{split} (12)
    {F}_{2}=w\left(x\right)\left[{-\lambda }_{1}{\left|{I}_{g}\left(x\right)-{I}_{g1}\right|}^{2}+{\lambda }_{2}{\left|{I}_{g}\left(x\right)-{I}_{g2}\right|}^{2}\right] , (13)

    where {\delta }_{\varepsilon }\left(\phi \right) is the derivative of {H}_{\varepsilon }\left(\phi \right) and w\left(x\right) is an adaptive weighting that adjusts the proportion of local-based and global-based energy. In the region close to the edge of the lipids, the weight of the local term should be increased. In the region far away from the edge of the lipids where intensities vary slowly, the weight of the global-based energy should be increased. Referencing the work of [28], the following weight function is obtained:

    w\left(x\right)=k\cdot average\left({C}_{N}\right)\cdot \left({1-C}_{N}\left(x\right)\right)\quad, (14)

    where k is a fixed constant selected according to the gray distribution characteristics of the image, and {C}_{N} is the local contrast of the given image, which is defined as follows:

    {C}_{N}\left(x\right)=\frac{{\mathrm{max}\left({I}_{m}\right)}_{N}-{\mathrm{min}\left({I}_{m}\right)}_{N}}{{\mathrm{max}\left({I}_{m}\right)}_{g}}\quad, (15)

    where N is the size of the local window, {\max\left({I}_{m}\right)}_{N} and {\min\left({I}_{m}\right)}_{N} are the maximum and minimum of the modified hue values within this local window, respectively. In addition, {\max\left({I}_{m}\right)}_{g} represents the intensity level of the image. {C}_{N} ranges from 0 to 1, and reflects how rapidly the value changes in a local region. The average\left({C}_{N}\right) is the average value of {C}_{N} over the whole image, representing the image’s overall contrast information. In homogeneous areas, the global term is dominant, and the weight function w\left(x\right) is large. In inhomogeneous areas, the local term is dominant and the weight function 1 − w\left(x\right) is large.
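The adaptive weight of Eqs. (14)-(15) can be sketched directly from these definitions (the explicit window loop below favours clarity over speed; window size N and the constant k are the illustrative defaults used later in the experiments):

```python
import numpy as np

def local_contrast(Im, N=7):
    """Eq. (15): (local max - local min) / global max over N x N windows,
    computed with edge-replicated padding."""
    pad = N // 2
    padded = np.pad(Im, pad, mode='edge')
    H, W = Im.shape
    C = np.empty((H, W), dtype=float)
    gmax = Im.max() + 1e-12
    for i in range(H):
        for j in range(W):
            win = padded[i:i + N, j:j + N]
            C[i, j] = (win.max() - win.min()) / gmax
    return C

def adaptive_weight(Im, N=7, k=0.1):
    """Eq. (14): w(x) = k * mean(C_N) * (1 - C_N(x)).
    Large in homogeneous regions (global term dominates),
    small near edges (local term dominates)."""
    C = local_contrast(Im, N)
    return k * C.mean() * (1 - C)
```

On an image with a sharp step edge, the weight is larger in the flat region than at the edge, matching the intended division of labour between the global and local energies.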

    The proposed model has a certain sensitivity to the initial contour. In order to prevent the selected initial contour from causing erroneous segmentation results and reduce the number of iterations, the result obtained by the threshold method is used as the initial contour. The initial level set function {\phi }_{0} is defined as follows:

    {\phi }_{0}\left(x,t=0\right)=\left\{\begin{split} &-1,\quad x\in {\mathrm{\Omega }}_{0}\\ &1,\quad\quad {\mathrm{else}}\end{split}\right. \quad, (16)

    where {\mathrm{\Omega }}_{0} is the inner region of the initial contour. The algorithm steps of our method can be written as follows:

    Step 1: Place the initial contour C using the threshold method and initialize the level set function {\phi }_{0} .

    Step 2: Set the values of various parameters, such as \sigma , {\lambda }_{1} , {\lambda }_{2},\mathrm{\Delta }t,\mu ,k,and \;\nu .

    Step 3: Calculate the local term {F}_{1} and the global term {F}_{2} .

    Step 4: Evolve the level set function \phi according to Eq. (11).

    Step 5: Return to Step 3 until the convergence criterion (fewer than 20 pixels change in one iteration) is met.
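The initialization of Eq. (16) can be sketched as follows, with a simple local-mean adaptive threshold standing in for the adaptive threshold method of [22] (the `block` and `offset` parameters are hypothetical, introduced only for this illustration):

```python
import numpy as np

def initial_level_set(Im, block=15, offset=0.0):
    """Eq. (16): phi0 = -1 inside the thresholded region Omega_0, +1 elsewhere.
    A pixel belongs to Omega_0 when it exceeds the mean of its local
    block-by-block neighbourhood plus an offset (illustrative rule)."""
    pad = block // 2
    padded = np.pad(Im, pad, mode='edge')
    H, W = Im.shape
    phi0 = np.ones((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            local_mean = padded[i:i + block, j:j + block].mean()
            if Im[i, j] > local_mean + offset:
                phi0[i, j] = -1.0  # inside the initial contour
    return phi0
```

A bright square on a dark background yields phi0 = -1 inside the square, giving the evolution of Eq. (11) a starting contour already close to the target region.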

    To quantitatively describe the accuracy of lipid region segmentation, we uniformly use the accuracy, sensitivity, and Dice coefficient to compare the segmentation results with the ground truth (manual annotations). The accuracy is defined as

    { A}=\frac{{T}{P}+{T}{N}}{{T}{P}+{F}{P}+{F}{N}+{T}{N}} \quad, (17)

    as well as the sensitivity

    {S}_{e}=\frac{TP}{TP+FN} \quad, (18)

    and the Dice coefficient

    D=\frac{2{T}{P}}{2{T}{P}+{F}{P}+{F}{N}} \quad, (19)

    where TP denotes true positives, FP denotes false positives, FN denotes false negatives, and TN denotes true negatives. These three indices are commonly used to measure the quality of medical image segmentation. Their value ranges from 0 to 1, with a higher value representing a more accurate segmentation result.
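The three metrics can be computed from boolean masks in a few lines; a minimal sketch, assuming sensitivity takes its standard definition TP/(TP+FN):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Accuracy (Eq. (17)), sensitivity TP/(TP+FN), and Dice coefficient
    (Eq. (19)) between a predicted mask and a ground-truth mask."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)      # true positives
    tn = np.sum(~pred & ~gt)    # true negatives
    fp = np.sum(pred & ~gt)     # false positives
    fn = np.sum(~pred & gt)     # false negatives
    acc = (tp + tn) / (tp + tn + fp + fn)
    se = tp / (tp + fn) if (tp + fn) else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    return acc, se, dice
```

For example, a prediction that finds one of two ground-truth pixels in a 2×2 image gives accuracy 0.75, sensitivity 0.5, and Dice 2/3.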

    To verify the effectiveness of the proposed method, we performed phantom experiments under ME-NBI imaging conditions. The experiments were conducted on a phantom with an agar matrix coated with lipid to simulate the WOS on the gastrointestinal tract, and the segmentation results were evaluated with the comparison metrics. The specific experimental procedure is shown in Fig. 5.

    Figure  5.  Experimental procedure

    The system used in this experiment is a self-developed high-magnification endocytoscopic imaging system with a wide field of view and a maximum magnification of 500×, which can clearly image tiny structures (as shown in Fig. 6(a)). The light source supports three modes: white light, blue-white light, and blue-green light. The blue-green source consists of blue (415 nm) and green (540 nm) narrow-band LEDs (as shown in Fig. 6(c)) that illuminate simultaneously. After the CMOS sensor receives the signals, the image of each channel is fused to obtain the final image. The complete prototype system is shown in Fig. 6; details of the system can be found in our earlier paper[29]. This prototype system captured the phantom images.

    Figure  6.  The prototype of the endocytoscopic imaging system. (a) The mobile workstation, including the lightbox, the video system center, and endocytoscope; (b) the knob for amplification and attitude change; (c) structure of light source; (d) the tip of the endocytoscope

    All the phantom and in vitro samples were prepared from 2.5 g agar, 70 ml distilled water, an absorber, and a scatterer. Titanium dioxide was used to simulate the scattering characteristics of biological tissue[30], and the absorption of light was simulated by quinoline yellow and bovine hemoglobin. The solution was heated and poured onto a 3D-printed model with the classic WOS texture, demolded after solidification at a low temperature, and coated with lipid to simulate the WOS (as shown in Fig. 7(a)~7(b) (color online)). The phantom reflectance spectrum was measured by a spectrometer and compared with that of a pig stomach. The results showed that the phantoms effectively reproduce the main spectral characteristics of the pig stomach (as shown in Fig. 7(c) (color online)).

    Figure  7.  The experimental subjects. (a) The phantom obtained by demoulding from the 3D-printing model; (b) the phantom covered with lipid; (c) a comparison of the reflection spectra between the phantom and pig stomach

    We captured lipid images with different imaging conditions and shape textures using the self-developed endocytoscope. The NBI mode uses two narrow bands of blue and green light for illumination, which correspond to two absorption peaks of hemoglobin, reducing the brightness of the hemoglobin-containing gastric mucosa surface. In addition, lipids strongly reflect visible light, so the brightness and hue of lipids create a strong contrast with the surrounding tissue. A comparison of phantom images under white light and narrow-band illumination shows that NBI significantly enhances lipids relative to white light imaging (WLI) (as shown in Fig. 8, color online). On this basis, better observation and segmentation results can be obtained.

    Figure  8.  The color, hue value, and intensity images of (a) NBI and (b)WLI

    The specular reflection correction algorithm can improve the observation performance and segmentation accuracy. The correction results are shown in Fig. 9 (color online).

    Figure  9.  (a) The enhanced images, (b) reflective detection results, and (c) inpainting results

    The ME-NBI images of phantoms were captured using the self-developed endocytoscope. The lipid boundaries in these images are weak, making it difficult to segment lipid regions from the background. However, our method can overcome weak boundaries and accurately extract regions of interest based on the set initial contours. All code was run in MATLAB R2021a on a PC (CPU: AMD Ryzen 7 5800H, 3.20 GHz; RAM: 16.00 GB). Unless otherwise specified, the following default parameter settings were used in the experiments: σ=5.0, {\lambda }_{1}={\lambda }_{2}=1.0 , time step \mathrm{\Delta }t=0.1 , \mu =1.0, k=0.1 , and \nu =0.001\times 255\times 255 .

    When doctors make a lipid-related diagnosis, the main basis for evaluating the degree of lesions is the coverage of the lipid region and its morphological distribution characteristics, both of which can be obtained through the proposed method. Fig. 10 (color online) shows this method’s segmentation results on lipid images with different lighting conditions and morphological features, which shows that the method can effectively segment lipid regions and accurately display the morphology of lipid regions. LabelMe (an image annotation tool)[31] was then used to manually annotate lipid regions on these images as ground-truths to calculate segmentation accuracy.

    Figure  10.  Enhancement and segmentation results. (a) Initial images; (b) segmentation results; (c) manual annotations

    Table 1 displays the proposed method’s accuracy, sensitivity, and Dice value for each image. The values of all compared metrics are above 90%. Overall, this image segmentation model can accurately segment the lipid region based on the locally corrected hue value and the global intensity value.

    Table  1.  The accuracy, sensitivity, and Dice values of the proposed method

    Test image    A/%      Se/%     D
    Test_1        91.23    90.47    0.9029
    Test_2        93.23    91.65    0.9234
    Test_3        91.61    93.33    0.9169

    Adaptive thresholding accounts for spatial variations in illumination, and its result is used as the initial contour to prevent errors during evolution. To verify the significance of obtaining the initial contour by the adaptive threshold method in advance, the segmentation result from a conventional triangular contour was compared with the result from the contour obtained by the adaptive threshold method (shown in Fig. 11 (color online), where blue lines mark the segmentation boundary). It can be clearly seen that an initial contour with a regular shape such as a triangle may lead to an incorrect result, mainly because the textured shape of the lipid region is intricate and differs greatly from the initial contour.

    Figure  11.  Segmentation results of triangular initial contour. (a) Initial contour; (b) incorrect segmentation results (blue lines mark the segmentation boundary)

    To verify the influence of adaptive weight factors on segmentation results, the segmentation results using different fixed weight factors were compared with those using adaptive weights. The results are shown in Fig. 12 (color online). The results indicate that using adaptive weights can improve the robustness of the algorithm, and accurate segmentation results can be obtained under different illumination conditions.

    Figure  12.  Segmentation results using different weight factors

    The accuracy of the proposed method was compared with that of three other methods: the C-V model, the LBF model, and the LGIF model. For this comparison, all methods used the same initial contours obtained by the adaptive threshold method. Fig. 13 (color online) shows the input images, the results obtained by the C-V model, the LBF model, the LGIF model, the proposed method, and the manual annotations.

    Figure  13.  (a) Input images; segmentation results obtained by (b) C-V model; (c) LBF model; (d) LGIF model; (e) the proposed method; (f) manual annotations

    The segmentation CPU time and number of iterations are shown in Table 2. It can be seen that the C-V model is very fast but struggles to obtain correct segmentation results because it uses the intensity mean as the only classification criterion. The LBF model is sensitive to the initial contour; when the adaptive threshold result is used as the initial contour, it can obtain more accurate results, but it takes more time. The LGIF model can segment images with uneven intensity, but its segmentation quality is unstable. From the results, it can be seen that the proposed method outperforms the other three models in the segmentation of lipid images.

    Table  2.  The segmentation time and iteration numbers

                  C-V                  LBF                  LGIF                 Proposed
                  Iterations  Time(s)  Iterations  Time(s)  Iterations  Time(s)  Iterations  Time(s)
    Image1        33          2.6089   19          1.4171   18          1.3352   9           0.8359
    Image2        47          3.7690   48          3.6305   64          5.0240   6           0.6095
    Image3        27          2.5913   45          4.5998   19          1.7729   9           0.8001

    Since this endoscope has not yet obtained a clinical license, clinical images could not be acquired; instead, published WOS images were segmented to verify the practicability of the proposed algorithm. The results are shown in Fig. 14 (color online). The lipid regions are segmented accurately, indicating that the method also performs well on clinical images.

    Figure  14.  Segmentation of WOS images in previous papers[10, 12]. (a) Initial images; (b) segmentation contours; (c) segmentation results

    We propose a systematic analysis method for digestive tract lipid images captured by ME-NBI. First, the image is enhanced under the Retinex framework and specular reflections are corrected. Second, the lipid region is segmented by the modified active contour model, which uses a modified hue value to suppress noise and improve contrast, and employs the initial contour obtained by the adaptive threshold method together with an adaptive weight factor to obtain more accurate results. Finally, the accuracy of the proposed method is quantitatively evaluated with comparison metrics. The algorithm was verified by experiments on self-designed phantoms, where its average pixel accuracy reached 91.11%, demonstrating that the lipid region can be accurately segmented. At present, this work has mainly been carried out in the laboratory; we therefore plan to conduct more extensive animal experiments and combine our method with clinical applications in the future.
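For reference, the comparison metrics reported here (pixel accuracy A, sensitivity Se, and Dice coefficient D, as in Table 1) can be computed from a predicted mask and a manual annotation as follows. This is a sketch of the standard definitions, not the authors' evaluation code:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Pixel accuracy, sensitivity (recall), and Dice coefficient for
    binary masks, matching the A/Se/D columns of Table 1."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)      # lipid pixels correctly detected
    tn = np.sum(~pred & ~gt)    # background correctly rejected
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    accuracy = (tp + tn) / pred.size
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    return accuracy, sensitivity, dice
```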

    The authors declare that there are no conflicts of interest relevant to this article.

  • Figure 1.  Overview of the systematically analyzing method of lipid images

    Figure 2.  The procedure of pre-processing

    Figure 3.  Lipid region segmentation results based on the LGIF model. (a) Intensity value; (b) segmentation contours (red rectangles mark incorrect segmentation areas); (c) segmentation results

    Figure 4.  (a) The pixel’s hue value; (b) the pixel’s intensity value; (c) the modified image

    Figure 5.  Experimental procedure

    Figure 6.  The prototype of the endocytoscopic imaging system. (a) The mobile workstation, including the lightbox, the video system center, and endocytoscope; (b) the knob for amplification and attitude change; (c) structure of light source; (d) the tip of the endocytoscope

    Figure 7.  The experimental subjects. (a) The phantom obtained by demoulding from the 3D-printing model; (b) the phantom covered with lipid; (c) a comparison of the reflection spectra between the phantom and pig stomach

    Figure 8.  The color, hue value, and intensity images of (a) NBI and (b)WLI

    Figure 9.  (a) The enhanced images, (b) reflective detection results, and (c) inpainting results

    Figure 10.  Enhancement and segmentation results. (a) Initial images; (b) segmentation results; (c) manual annotations

    Figure 11.  Segmentation results of triangular initial contour. (a) Initial contour; (b) incorrect segmentation results (blue lines mark the segmentation boundary)


    Table  1.   The accuracy, sensitivity, and Dice values of the proposed method

    Test image     A/%      Se/%       D
    Test_1        91.23    90.47    0.9029
    Test_2        93.23    91.65    0.9234
    Test_3        91.61    93.33    0.9169

  • [1] SIEGEL R L, MILLER K D, JEMAL A. Cancer statistics, 2020[J]. CA: A Cancer Journal for Clinicians, 2020, 70(1): 7-30. doi: 10.3322/caac.21590
    [2] MIYAOKA M, YAO K, TANABE H, et al. Diagnosis of early gastric cancer using image enhanced endoscopy: a systematic approach[J]. Translational Gastroenterology and Hepatology, 2020, 5: 50. doi: 10.21037/tgh.2019.12.16
    [3] KODASHIMA S, FUJISHIRO M, KOIKE K. Image-enhanced endoscopy-NBI, FICE, i-scan[J]. Gastroenterological Endoscopy, 2010, 52(9): 2665-2677.
    [4] YAMADA S, DOYAMA H, YAO K, et al. An efficient diagnostic strategy for small, depressed early gastric cancer with magnifying narrow-band imaging: a post-hoc analysis of a prospective randomized controlled trial[J]. Gastrointestinal Endoscopy, 2014, 79(1): 55-63. doi: 10.1016/j.gie.2013.07.008
    [5] ANG T L, FOCK K M, TEO E K, et al. The diagnostic utility of narrow band imaging magnifying endoscopy in clinical practice in a population with intermediate gastric cancer risk[J]. European Journal of Gastroenterology & Hepatology, 2012, 24(4): 362-367.
    [6] YAO K, ANAGNOSTOPOULOS G K, RAGUNATH K. Magnifying endoscopy for diagnosing and delineating early gastric cancer[J]. Endoscopy, 2009, 41(5): 462-467. doi: 10.1055/s-0029-1214594
    [7] YAO K, TAKAKI Y, MATSUI T, et al. Clinical application of magnification endoscopy and narrow-band imaging in the upper gastrointestinal tract: new imaging techniques for detecting and characterizing gastrointestinal neoplasia[J]. Gastrointestinal Endoscopy Clinics of North America, 2008, 18(3): 415-433. doi: 10.1016/j.giec.2008.05.011
    [8] MUTO M, YAO K, KAISE M, et al. Magnifying endoscopy simple diagnostic algorithm for early gastric cancer (MESDA-G)[J]. Digestive Endoscopy, 2016, 28(4): 379-393. doi: 10.1111/den.12638
    [9] YAO K, IWASHITA A, NAMBU M, et al. Nature of white opaque substance in gastric epithelial neoplasia as visualized by magnifying endoscopy with narrow-band imaging[J]. Digestive Endoscopy, 2012, 24(6): 419-425. doi: 10.1111/j.1443-1661.2012.01314.x
    [10] YAO K, IWASHITA A, TANABE H, et al. White opaque substance within superficial elevated gastric neoplasia as visualized by magnification endoscopy with narrow-band imaging: a new optical sign for differentiating between adenoma and carcinoma[J]. Gastrointestinal Endoscopy, 2008, 68(3): 574-580. doi: 10.1016/j.gie.2008.04.011
    [11] NAKAYAMA A, KATO M, TAKATORI Y, et al. How I do it: Endoscopic diagnosis for superficial non-ampullary duodenal epithelial tumors[J]. Digestive Endoscopy, 2020, 32(3): 417-424. doi: 10.1111/den.13538
    [12] HISABE T, YAO K, IMAMURA K, et al. White opaque substance visualized using magnifying endoscopy with narrow-band imaging in colorectal epithelial neoplasms[J]. Digestive Diseases and Sciences, 2014, 59(10): 2544-2549. doi: 10.1007/s10620-014-3204-5
    [13] KAWASAKI K, KURAHARA K, YANAI S, et al. Significance of a white opaque substance under magnifying narrow-band imaging colonoscopy for the diagnosis of colorectal epithelial neoplasms[J]. Gastrointestinal Endoscopy, 2015, 82(6): 1097-1104. doi: 10.1016/j.gie.2015.06.023
    [14] HARA Y, GODA K, HIROOKA S, et al. Association between endoscopic milk-white mucosa, epithelial intracellular lipid droplets, and histological grade of superficial non-ampullary duodenal epithelial tumors[J]. Diagnostics, 2021, 11(5): 769. doi: 10.3390/diagnostics11050769
    [15] YAMASAKI K, HISABE T, YAO K, et al. White opaque substance, a new optical marker on magnifying endoscopy: usefulness in diagnosing colorectal epithelial neoplasms[J]. Clinical Endoscopy, 2021, 54(4): 570-577. doi: 10.5946/ce.2020.205
    [16] UEO T, YONEMASU H, YAO K, et al. Histologic differentiation and mucin phenotype in white opaque substance-positive gastric neoplasias[J]. Endoscopy International Open, 2015, 3(6): E597-E604. doi: 10.1055/s-0034-1393177
    [17] OHTSU K, YAO K, MATSUNAGA K, et al. Lipid is absorbed in the stomach by epithelial neoplasms (adenomas and early cancers): a novel functional endoscopy technique[J]. Endoscopy International Open, 2015, 3(4): E318-E322. doi: 10.1055/s-0034-1392095
    [18] LIU X Q, WANG CH L, BAI J Y, et al. Hue-texture-embedded region-based model for magnifying endoscopy with narrow-band imaging image segmentation based on visual features[J]. Computer Methods and Programs in Biomedicine, 2017, 145: 53-66. doi: 10.1016/j.cmpb.2017.04.010
    [19] GANZ M, YANG X Y, SLABAUGH G. Automatic segmentation of polyps in colonoscopic narrow-band imaging data[J]. IEEE Transactions on Biomedical Engineering, 2012, 59(8): 2144-2151. doi: 10.1109/TBME.2012.2195314
    [20] FIGUEIREDO I N, PINTO L, FIGUEIREDO P N, et al. Unsupervised segmentation of colonic polyps in narrow-band imaging data based on manifold representation of images and Wasserstein distance[J]. Biomedical Signal Processing and Control, 2019, 53: 101577. doi: 10.1016/j.bspc.2019.101577
    [21] KASS M, WITKIN A, TERZOPOULOS D. Snakes: Active contour models[J]. International Journal of Computer Vision, 1988, 1(4): 321-331. doi: 10.1007/BF00133570
    [22] BRADLEY D, ROTH G. Adaptive thresholding using the integral image[J]. Journal of Graphics Tools, 2007, 12(2): 13-21. doi: 10.1080/2151237X.2007.10129236
    [23] WANG L, LI CH M, SUN Q S, et al. Active contours driven by local and global intensity fitting energy with application to brain MR image segmentation[J]. Computerized Medical Imaging and Graphics, 2009, 33(7): 520-531. doi: 10.1016/j.compmedimag.2009.04.010
    [24] CHAN T F, VESE L A. Active contours without edges[J]. IEEE Transactions on Image Processing, 2001, 10(2): 266-277. doi: 10.1109/83.902291
    [25] LI CH M, KAO C Y, GORE J C, et al. Implicit active contours driven by local binary fitting energy[C]. 2007 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2007: 1-7.
    [26] JIANG X L, WU X L, XIONG Y, et al. Active contours driven by local and global intensity fitting energies based on local entropy[J]. Optik, 2015, 126(24): 5672-5677. doi: 10.1016/j.ijleo.2015.09.021
    [27] LI CH M, XU CH Y, GUI CH F, et al. Level set evolution without re-initialization: a new variational formulation[C]. 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), IEEE, 2005: 430-436.
    [28] ZHANG L, PENG X G, LI G, et al. A novel active contour model for image segmentation using local and global region-based information[J]. Machine Vision and Applications, 2017, 28(1-2): 75-89. doi: 10.1007/s00138-016-0805-3
    [29] ZHANG W, NIU CH Y, YOU X H, et al. Endocytoscopic imaging system with high magnification and large field of view[J]. Acta Optica Sinica, 2021, 41(17): 1717001. doi: 10.3788/AOS202141.1717001
    [30] POGUE B W, PATTERSON M S. Review of tissue simulating phantoms for optical spectroscopy, imaging and dosimetry[J]. Journal of Biomedical Optics, 2006, 11(4): 041102. doi: 10.1117/1.2335429
    [31] RUSSELL B C, TORRALBA A, MURPHY K P, et al. LabelMe: A database and web-based tool for image annotation[J]. International Journal of Computer Vision, 2008, 77(1): 157-173.
Publication history
  • Received: 2023-09-04
  • Revised: 2023-10-20
  • Published online: 2024-03-08
