Overview of 3D point cloud super-resolution technology

BI Yong, PAN Ming-qi, ZHANG Shuo, GAO Wei-nan

Citation: BI Yong, PAN Ming-qi, ZHANG Shuo, GAO Wei-nan. Overview of 3D point cloud super-resolution technology[J]. Chinese Optics, 2022, 15(2): 210-223. doi: 10.37188/CO.2021-0176

Funds: Supported by the Special Project of the Central Government Guiding Local Science and Technology Development in Beijing, 2020 (No. Z20111000430000)

    Author biographies:

    BI Yong (1973—), male, born in Harbin, Heilongjiang Province, is a professor and doctoral supervisor at the Technical Institute of Physics and Chemistry, Chinese Academy of Sciences. He received his Ph.D. from the Institute of Physics, Chinese Academy of Sciences in 2004. His research focuses on laser application technology. E-mail: biyong@mail.ipc.ac.cn

    PAN Ming-qi (1996—), male, born in Zhuhai, Guangdong Province, is a master's degree candidate at the University of Chinese Academy of Sciences. He received his bachelor's degree from Harbin Institute of Technology in 2018. His research focuses on LiDAR-based 3D reconstruction. E-mail: panmingqi19@mails.ucas.ac.cn

    ZHANG Shuo (1993—), female, born in Harbin, Heilongjiang Province, is a postdoctoral researcher at the Technical Institute of Physics and Chemistry, Chinese Academy of Sciences. She received her Ph.D. from the University of Chinese Academy of Sciences in 2021. Her research focuses on LiDAR. E-mail: zhangshuo@mail.ipc.ac.cn

    GAO Wei-nan (1983—), male, born in Baicheng, Jilin Province, is a senior engineer at the Technical Institute of Physics and Chemistry, Chinese Academy of Sciences. He received his Ph.D. from Jilin University in 2009. He has long been engaged in research on laser technologies and their applications, such as high-power lasers and full-color laser display. E-mail: wngao@mail.ipc.ac.cn

  • CLC number: TP391.4

  • Abstract: With the development of machine vision technology, how to record and model the real world accurately and efficiently has become a topic of intense interest. Because of hardware limitations, the point cloud data that can typically be acquired are of low resolution and cannot meet the requirements of practical applications, so research on point cloud super-resolution is necessary. This paper introduces the significance, progress and evaluation methods of 3D point cloud super-resolution technology, reviews classical super-resolution algorithms and machine-learning-based super-resolution algorithms respectively, summarizes the characteristics of current methods, points out the main problems and challenges of current point cloud super-resolution technology, and finally discusses its future development directions.
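To make the upsampling task described in the abstract concrete, the sketch below densifies a sparse point cloud by inserting midpoints between each point and its nearest neighbours. It is only a naive baseline under assumed NumPy/SciPy dependencies, not an implementation of any surveyed method; the helper name naive_upsample is hypothetical.

```python
# A naive illustration of the point cloud upsampling ("super-resolution")
# task: densify a sparse N x 3 point set by inserting midpoints between
# each point and its nearest neighbours. This is NOT one of the surveyed
# methods (PU-Net, MPU, PU-GAN, PU-GCN); NumPy/SciPy and the helper name
# `naive_upsample` are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree


def naive_upsample(points: np.ndarray, k: int = 4) -> np.ndarray:
    """Return a denser cloud: the input points plus midpoints to their k nearest neighbours."""
    tree = cKDTree(points)
    # k + 1 because each point's nearest neighbour is itself.
    _, idx = tree.query(points, k=k + 1)
    neighbours = points[idx[:, 1:]]                 # shape (N, k, 3)
    midpoints = (points[:, None, :] + neighbours) / 2.0
    dense = np.vstack([points, midpoints.reshape(-1, 3)])
    return np.unique(dense, axis=0)                 # drop duplicated midpoints


if __name__ == "__main__":
    sparse = np.random.rand(1024, 3).astype(np.float32)   # stand-in for a low-resolution scan
    dense = naive_upsample(sparse, k=4)
    print(sparse.shape, "->", dense.shape)                 # (1024, 3) -> (N', 3), N' > 1024
```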

     

  • Figure 1.  The architecture of PU-Net[39]

    Figure 2.  Up-sampling model of MPU[42]

    Figure 3.  The architecture of PU-GAN[44]

    Figure 4.  The architecture of PU-GCN[47]

    Table 1.  RMSE comparison (two magnification factors per dataset)

    | Local/Global        | Method                                | Art         | Moebius     | Books       |
    |---------------------|---------------------------------------|-------------|-------------|-------------|
    | Local information   | Edge-feature-guided JBUF[26]          | 1.08 / 1.93 |             |             |
    | Local information   | Improved bilateral filter[27]         | 1.93 / 2.45 | 1.63 / 2.06 | 1.47 / 1.81 |
    | Local information   | Noise-aware bilateral filter[28]      | 2.90 / 4.75 | 1.55 / 2.28 | 1.36 / 1.94 |
    | Local information   | Guided-image-based filter[29]         | 2.40 / 3.32 | 2.03 / 2.60 | 1.82 / 2.31 |
    | Global optimization | Second-order TGV[30]                  | 1.29 / 2.06 | 0.90 / 1.38 | 0.75 / 1.16 |
    | Global optimization | Second-order TGV + edge indicator[31] | 1.21 / 1.93 | 0.81 / 1.32 | 0.65 / 1.07 |
    | Global optimization | MRF[33]                               | 2.24 / 3.85 | 2.29 / 3.09 | 2.08 / 2.85 |
    | Global optimization | Improved MRF[34]                      | 1.00 / 1.50 |             |             |
    | Global optimization | Improved MRF[35]                      | 1.82 / 2.78 | 1.49 / 2.13 | 1.43 / 1.98 |
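Table 1 ranks the methods by the RMSE between each upsampled depth map and its ground truth. The sketch below shows one plausible way such a score could be computed; the depth_rmse helper and the valid-pixel mask are assumptions for illustration rather than the exact protocol of the cited papers.

```python
# Minimal sketch of the RMSE score compared in Table 1: root-mean-square
# error between an upsampled depth map and its ground truth. The helper
# name `depth_rmse` and the valid-pixel mask are illustrative assumptions;
# the cited papers may differ in masking, units and evaluation regions.
import numpy as np


def depth_rmse(predicted: np.ndarray, ground_truth: np.ndarray) -> float:
    """RMSE over pixels where the ground-truth depth is valid (> 0)."""
    valid = ground_truth > 0
    diff = predicted[valid].astype(np.float64) - ground_truth[valid].astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))


if __name__ == "__main__":
    gt = 10.0 * np.random.rand(480, 640)                   # stand-in ground-truth depth map
    pred = gt + np.random.normal(0.0, 0.5, size=gt.shape)  # stand-in upsampled result
    print(f"RMSE: {depth_rmse(pred, gt):.3f}")             # close to 0.5 for this synthetic noise
```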
  • [1] LI S T. Application of point cloud in 3D laser scanner[J]. Geological and Mineral Surveying and Mapping, 2020, 3(2): 141-142. (in Chinese)
    [2] DU R J, GE B ZH, CHEN L. Texture mapping of multi-view high-resolution images and binocular 3D point clouds[J]. Chinese Optics, 2020, 13(5): 1055-1064. (in Chinese) doi: 10.37188/CO.2020-0034
    [3] DU Q SH, LI D D, CHEN H, et al. PIN tip extraction from 3D point cloud of structured light[J]. Chinese Journal of Liquid Crystals and Displays, 2021, 36(9): 1331-1340. (in Chinese) doi: 10.37188/CJLCD.2020-0321
    [4] WU K SH, WEI ZH H, HE X, et al. Signatures recognition based on strokes 3D depth feature[J]. Chinese Journal of Liquid Crystals and Displays, 2019, 34(10): 1013-1020. (in Chinese) doi: 10.3788/YJYXS20193410.1013
    [5] TAN H CH, GENG Y B, DU W. An efficient method of face super-resolution fusion using 3D cloud points[J]. Optical Technique, 2016, 42(6): 501-505. (in Chinese)
    [6] ZHANG Y, REN G Q, CHENG Z Y, et al. Application research of three-dimensional LiDAR in unmanned vehicle environment perception[J]. Laser & Optoelectronics Progress, 2019, 56(13): 130001. (in Chinese)
    [7] WANG SH F, DAI X, XU N, et al. Overview on environment perception technology for unmanned ground vehicle[J]. Journal of Changchun University of Science and Technology (Natural Science Edition), 2017, 40(1): 1-6. (in Chinese)
    [8] YANG B SH, LIANG F X, HUANG R G. Progress, challenges and perspectives of 3D LiDAR point cloud processing[J]. Acta Geodaetica et Cartographica Sinica, 2017, 46(10): 1509-1516. (in Chinese) doi: 10.11947/j.AGCS.2017.20170351
    [9] ZHANG SH Y, HOU X Y, CUI H, et al. Depth image acquisition using laser speckle[J]. Chinese Optics, 2016, 9(6): 633-641. (in Chinese)
    [10] BU Y M, DU X P, ZENG ZH Y, et al. Research progress and trend analysis of non-scanning laser 3D imaging radar[J]. Chinese Optics, 2018, 11(5): 711-727. (in Chinese) doi: 10.3788/co.20181105.0711
    [11] SU D, ZHANG Y, QU CH ZH, et al. Depth image restoration method based on color image contour[J]. Chinese Journal of Liquid Crystals and Displays, 2021, 36(3): 456-464. (in Chinese) doi: 10.37188/CJLCD.2020-0222
    [12] FOIX S, ALENYA G, TORRAS C. Lock-in Time-of-Flight (ToF) cameras: a survey[J]. IEEE Sensors Journal, 2011, 11(9): 1917-1926. doi: 10.1109/JSEN.2010.2101060
    [13] SCHUON S, THEOBALT C, DAVIS J, et al.. High-quality scanning using time-of-flight depth superresolution[C]. Proceedings of 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, IEEE, 2008: 1-7.
    [14] BALURE C S, KINI M R. Depth image super-resolution: a review and wavelet perspective[C]. Proceedings of International Conference on Computer Vision and Image Processing, Springer, 2017: 543-555.
    [15] XIAO S, HAN G Q, WO Y. Survey of digital image super resolution reconstruction technology[J]. Computer Science, 2009, 36(12): 8-13, 54. (in Chinese) doi: 10.3969/j.issn.1002-137X.2009.12.003
    [16] HARRIS J L. Diffraction and resolving power[J]. Journal of the Optical Society of America, 1964, 54(7): 931-936. doi: 10.1364/JOSA.54.000931
    [17] GOODMAN J W. Introduction to Fourier Optics[M]. San Francisco: McGraw-Hill, 1968.
    [18] XIE H P, XIE K L, YANG H T. Research progress of image super-resolution methods[J]. Computer Engineering and Applications, 2020, 56(19): 34-41. (in Chinese)
    [19] VAN OUWERKERK J D. Image super-resolution survey[J]. Image and Vision Computing, 2006, 24(10): 1039-1052. doi: 10.1016/j.imavis.2006.02.026
    [20] WANG H, ZHANG Y, SHEN H H, et al. Review of image enhancement algorithms[J]. Chinese Optics, 2017, 10(4): 438-448. (in Chinese) doi: 10.3788/co.20171004.0438
    [21] STARK H, OSKOUI P. High-resolution image recovery from image-plane arrays, using convex projections[J]. Journal of the Optical Society of America A, 1989, 6(11): 1715-1726. doi: 10.1364/JOSAA.6.001715
    [22] GEVREKCI M, PAKIN K. Depth map super resolution[C]. Proceedings of the 18th IEEE International Conference on Image Processing, IEEE, 2011: 3449-3452.
    [23] PATTI A J, ALTUNBASAK Y. Artifact reduction for set theoretic super resolution image reconstruction with edge adaptive constraints and higher-order interpolants[J]. IEEE Transactions on Image Processing, 2001, 10(1): 179-186. doi: 10.1109/83.892456
    [24] TOMASI C, MANDUCHI R. Bilateral filtering for gray and color images[C]. Sixth International Conference on Computer Vision, IEEE, 1998: 839-846.
    [25] KOPF J, COHEN M F, LISCHINSKI D, et al. Joint bilateral upsampling[J]. ACM Transactions on Graphics, 2007, 26(3): 96-es. doi: 10.1145/1276377.1276497
    [26] TU Y F, ZHANG X D, ZHANG J, et al. Depth map super-resolution reconstruction based on the edge feature-guided[J]. Computer Applications and Software, 2017, 34(2): 220-225. (in Chinese) doi: 10.3969/j.issn.1000-386x.2017.02.039
    [27] YANG Q X, YANG R G, DAVIS J, et al.. Spatial-depth super resolution for range images[C]. Proceedings of 2007 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2007: 1-8.
    [28] CHAN D, BUISMAN H, THEOBALT C, et al.. A Noise-Aware Filter for Real-Time Depth Upsampling[C]. Multi-camera & Multi-modal Sensor Fusion Algorithms and Applications, Marseille, France: M2SFA2, 2008: inria-00326784.
    [29] HE K M, SUN J, TANG X O. Guided image filtering[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(6): 1397-1409. doi: 10.1109/TPAMI.2012.213
    [30] FERSTL D, REINBACHER C, RANFTL R, et al.. Image guided depth upsampling using anisotropic total generalized variation[C]. 2013 IEEE International Conference on Computer Vision, IEEE, 2013: 993-1000.
    [31] DI W W, ZHANG X D, HU L M, et al. Depth image super-resolution based on second-order total generalized variation constrained by color image[J]. Journal of Image and Graphics, 2014, 19(8): 1162-1167. (in Chinese) doi: 10.11834/jig.20140807
    [32] WANG Y, PIAO Y, SUN R CH. Depth image super-resolution construction combined with high-resolution color image of the same scene[J]. Acta Optica Sinica, 2017, 37(8): 0810002. (in Chinese) doi: 10.3788/AOS201737.0810002
    [33] DIEBEL J, THRUN S. An application of Markov random fields to range sensing[C]. Proceedings of the 18th Conference on Neural Information Processing Systems, ACM, 2005: 291-298.
    [34] CHEN J Q, LI R. A depth map super-resolution reconstruction based on improved Markov random field[J]. Microprocessors, 2017, 38(4): 60-63, 71. (in Chinese) doi: 10.3969/j.issn.1002-2279.2017.04.015
    [35] PARK J, KIM H, TAI Y W, et al.. High quality depth map upsampling for 3D-TOF cameras[C]. 2011 International Conference on Computer Vision, IEEE, 2011: 1623-1630.
    [36] SCHARSTEIN D, PAL C. Learning conditional random fields for stereo[C]. 2007 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2007: 1-8.
    [37] DONG CH, LOY C C, HE K M, et al.. Learning a deep convolutional network for image super-resolution[C]. Proceedings of the 13th European Conference on Computer Vision, Springer, 2014: 184-199.
    [38] DONG CH, LOY C C, HE K M, et al. Image super-resolution using deep convolutional networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(2): 295-307. doi: 10.1109/TPAMI.2015.2439281
    [39] YU L Q, LI X ZH, FU C W, et al.. PU-Net: point cloud upsampling network[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, 2018: 2790-2799.
    [40] CHARLES R Q, SU H, KAICHUN M, et al.. PointNet: deep learning on point sets for 3D classification and segmentation[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2017: 77-85.
    [41] QI C R, YI L, SU H, et al.. PointNet++: deep hierarchical feature learning on point sets in a metric space[C]. Proceedings of the 31st International Conference on Neural Information Processing Systems, ACM, 2017: 5105-5114.
    [42] WANG Y F, WU SH H, HUANG H, et al.. Patch-based progressive 3D point set upsampling[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, 2019: 5951-5960.
    [43] GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial networks[J]. Communications of the ACM, 2020, 63(11): 139-144. doi: 10.1145/3422622
    [44] LI R H, LI X ZH, FU C W, et al.. PU-GAN: a point cloud upsampling adversarial network[C]. 2019 IEEE/CVF International Conference on Computer Vision, IEEE, 2019: 7202-7211.
    [45] YANG Y Q, FENG CH, SHEN Y R, et al.. FoldingNet: point cloud auto-encoder via deep grid deformation[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, 2018: 206-215.
    [46] KIPF T N, WELLING M. Semi-supervised classification with graph convolutional networks[C]. 5th International Conference on Learning Representations, OpenReview.net, 2017.
    [47] QIAN G CH, ABUALSHOUR A, LI G H, et al.. PU-GCN: point cloud upsampling using graph convolutional networks[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, 2021: 11678-11687.
    [48] WU H, ZHANG J, HUANG K. Point cloud super resolution with adversarial residual graph networks[J]. arXiv preprint, 2019: arXiv: 1908.02111.
    [49] YANG L B, WANG SH SH, MA S W, et al.. HiFaceGAN: face renovation via collaborative suppression and replenishment[C]. Proceedings of the 28th ACM International Conference on Multimedia, ACM, 2020: 1551-1560.
    [50] SHAN T X, WANG J K, CHEN F F, et al. Simulation-based lidar super-resolution for ground vehicles[J]. Robotics and Autonomous Systems, 2020, 134: 103647. doi: 10.1016/j.robot.2020.103647
Publication history
  • Received: 2021-10-08
  • Revised: 2021-10-28
  • Accepted: 2021-12-20
  • Published online: 2021-12-24
  • Issue date: 2022-03-21
