
A Solution to the Next Best View Problem for Automated Surface Acquisition

Published: 01 October 1999

Abstract

A solution to the next best view (NBV) problem for automated surface acquisition is presented. The NBV problem is to determine which areas of a scanner's viewing volume need to be scanned to sample all of the visible surfaces of an a priori unknown object, and where to position/control the scanner to sample them. It is argued that solutions to the NBV problem are constrained by the other steps in a surface acquisition system and by the range scanner's particular sampling physics. A method for determining the unscanned areas of the viewing volume is presented. In addition, a novel representation, positional space (PS), is presented, which facilitates a solution to the NBV problem by representing what must be and what can be scanned in a single data structure. The number of costly computations needed to determine whether an area of the viewing volume would be occluded from some scanning position is decoupled from the number of positions considered for the NBV, thus reducing the computational cost of choosing one. An automated surface acquisition system designed to scan all visible surfaces of an a priori unknown object is demonstrated on real objects.
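To make the decoupling concrete, here is a minimal sketch of a positional-space-style NBV scorer in Python. It assumes a fixed set of sampled directions and a user-supplied occlusion test; the names `nbv`, `fibonacci_sphere`, and the `occluded` callback are illustrative assumptions, not the paper's implementation. The expensive visibility test is evaluated once per unscanned ("void") patch, and every candidate viewpoint is then scored by cheap lookups into that table.

```python
import numpy as np

def fibonacci_sphere(n):
    """Return n roughly uniform unit direction vectors on the sphere."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i            # golden-angle spacing
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def nbv(void_centers, candidates, occluded, n_dirs=64, fov_cos=0.9):
    """Pick the (position, view_axis) candidate that could sample the most
    void patches. The costly test occluded(center, direction) -> bool runs
    n_dirs times per patch, independent of how many candidates are scored."""
    dirs = fibonacci_sphere(n_dirs)
    # Per patch, precompute which sampled directions reach it unoccluded:
    # a single table standing in for "what must be / can be scanned".
    reachable = [np.array([not occluded(c, d) for d in dirs])
                 for c in void_centers]
    best, best_score = -1, -1
    for k, (pos, axis) in enumerate(candidates):
        score = 0
        for center, vis in zip(void_centers, reachable):
            to_cam = (pos - center) / np.linalg.norm(pos - center)
            nearest = int(np.argmax(dirs @ to_cam))   # closest stored direction
            in_fov = float(np.dot(-to_cam, axis)) > fov_cos
            score += int(bool(vis[nearest]) and in_fov)
        if score > best_score:
            best, best_score = k, score
    return best, best_score

# Example: one void patch at the origin, an unobstructed scanner 2 m above it.
# best, score = nbv([np.zeros(3)],
#                   [(np.array([0.0, 0.0, 2.0]), np.array([0.0, 0.0, -1.0]))],
#                   occluded=lambda c, d: False)
```

Because the occlusion tests scale with (patches × sampled directions) rather than with the number of candidate viewpoints, adding candidates only adds cheap table lookups, which is the cost structure the abstract describes.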

Published In

IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 21, Issue 10
October 1999
141 pages
ISSN: 0162-8828

Publisher

IEEE Computer Society

United States

Publication History

Published: 01 October 1999

Author Tags

  1. Active vision
  2. automated surface acquisition
  3. model acquisition
  4. next best view
  5. range imaging
  6. reverse engineering
  7. sensor planning

Qualifiers

  • Research-article

Article Metrics

  • Downloads (last 12 months): 0
  • Downloads (last 6 weeks): 0
Reflects downloads up to 04 Oct 2024

Cited By

  • (2024) A Tree-Based Next-Best-Trajectory Method for 3-D UAV Exploration. IEEE Transactions on Robotics, 40, pp. 3496-3513. DOI: 10.1109/TRO.2024.3422052. Online publication date: 1-Jan-2024.
  • (2024) Towards field deployment of MAVs in adaptive exploration of GPS-denied subterranean environments. Robotics and Autonomous Systems, 176:C. DOI: 10.1016/j.robot.2024.104663. Online publication date: 1-Jun-2024.
  • (2023) A Survey on Active Simultaneous Localization and Mapping: State of the Art and New Frontiers. IEEE Transactions on Robotics, 39:3, pp. 1686-1705. DOI: 10.1109/TRO.2023.3248510. Online publication date: 1-Jun-2023.
  • (2023) A global generalized maximum coverage-based solution to the non-model-based view planning problem for object reconstruction. Computer Vision and Image Understanding, 226:C. DOI: 10.1016/j.cviu.2022.103585. Online publication date: 1-Jan-2023.
  • (2023) REF: A Rapid Exploration Framework for Deploying Autonomous MAVs in Unknown Environments. Journal of Intelligent and Robotic Systems, 108:3. DOI: 10.1007/s10846-023-01836-z. Online publication date: 20-Jun-2023.
  • (2022) 3D Move to See: Multi-perspective visual servoing towards the next best view within unstructured and occluded environments. 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3890-3897. DOI: 10.1109/IROS40897.2019.8967918. Online publication date: 28-Dec-2022.
  • (2022) A Double Branch Next-Best-View Network and Novel Robot System for Active Object Reconstruction. 2022 International Conference on Robotics and Automation (ICRA), pp. 7306-7312. DOI: 10.1109/ICRA46639.2022.9811769. Online publication date: 23-May-2022.
  • (2022) Bayesian Probabilistic Stopping Test and Asymptotic Shortest Time Trajectories for Object Reconstruction with a Mobile Manipulator Robot. Journal of Intelligent and Robotic Systems, 105:4. DOI: 10.1007/s10846-022-01696-z. Online publication date: 1-Aug-2022.
  • (2021) Interactive Viewpoint Exploration for Constructing View-Dependent Models. Proceedings of the 14th ACM SIGGRAPH Conference on Motion, Interaction and Games, pp. 1-8. DOI: 10.1145/3487983.3488287. Online publication date: 10-Nov-2021.
  • (2021) Exploration-RRT: A multi-objective Path Planning and Exploration Framework for Unknown and Unstructured Environments. 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3429-3435. DOI: 10.1109/IROS51168.2021.9636243. Online publication date: 27-Sep-2021.
