A robotic vision system to measure tree traits

arXiv:1707.05368v2 [cs.RO] 18 Dec 2017

Amy Tabb¹ and Henry Medeiros²

Abstract— The autonomous measurement of tree traits, such as branching structure, branch diameters, branch lengths, and branch angles, is required for tasks such as robotic pruning of trees as well as structural phenotyping. We propose a robotic vision system called the Robotic System for Tree Shape Estimation (RoTSE) to determine tree traits in field settings. The process is composed of the following stages: image acquisition with a mobile robot unit, segmentation, reconstruction, curve skeletonization, conversion to a graph representation, and then computation of traits. Quantitative and qualitative results on apple trees are shown in terms of accuracy, computation time, and robustness. Compared to ground truth measurements, the RoTSE produced the following estimates: branch diameter (mean-squared error 0.99 mm), branch length (mean-squared error 45.64 mm), and branch angle (mean-squared error 10.36 degrees). The average run time was 8.47 minutes when the voxel resolution was 3 mm³.

I. INTRODUCTION

This paper describes a system for autonomously sensing and describing the architecture of leafless trees in field settings. This system is particularly useful for automation tasks related to managing fruit trees. The most natural task for this system is dormant pruning. In present-day dormant pruning, workers selectively remove branches from fruit trees in order to optimize fruit production. In some settings, they use pruners (sometimes called loppers) and ladders. In others, they use pneumatically or battery-powered shears and ride on platforms. Whatever the configuration, the task is still manual and requires humans to perform arduous work under harsh weather conditions, since dormant pruning must be carried out during the winter months. There has been significant interest in industry in the automation of dormant pruning. In order to do so, it is necessary to autonomously detect the branching structure of trees, as pruning rules developed by horticulturists rely on specific measurements of the structure of the tree [1].

An alternative and equally challenging application for agricultural robotic vision systems is that of structural phenotyping. An organism contains genetic information, called the genotype. Its phenotype is the result of its genotype interacting with the environment. In structural phenotyping, the structure of plants is sensed in order to build a map between genotype and phenotype, a field sometimes referred to as phenomics [2]. While genotyping has become increasingly automated, there has been little progress in the automation of phenotyping, particularly for structural traits acquired in outdoor scenarios. For tree crops, the state of the art consists of taking a sample of branches and manually measuring them with a flexible tape and a protractor, a labor-intensive and error-prone process. An automated system would allow greater numbers of trees to be measured in a more timely fashion, without the need for extra labor.

This paper's citation information is: A. Tabb and H. Medeiros, "A robotic vision system to measure tree traits," 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 2017, pp. 6005-6012.
doi: 10.1109/IROS.2017.8206497

Mention of trade names or commercial products in this publication is solely for the purpose of providing specific information and does not imply recommendation or endorsement by the U.S. Department of Agriculture. USDA is an equal opportunity provider and employer. A. Tabb acknowledges the support of US National Science Foundation grant number IOS-1339211.

¹ USDA-ARS-AFRS, Kearneysville, West Virginia, USA. amy.tabb@ars.usda.gov
² Marquette University, Electrical and Computer Engineering, Milwaukee, Wisconsin, USA. henry.medeiros@marquette.edu

Fig. 1. [Best viewed in color.] Robotic System for Tree Shape Estimation (RoTSE). The system consists of a Denso VS-6577G industrial arm and a generator mounted to a small truck.

A. Contributions

This paper describes a robotic system to determine tree structure autonomously (see Figure 1), which we denote as the Robotic System for Tree Shape Estimation (RoTSE) for the remainder of this paper. Its advantages over the state of the art are:
1) The system is able to capture the following tree measurements with known accuracy: branch diameter (mean-squared error 0.99 mm), branch length (mean-squared error 45.64 mm), and branch angle (mean-squared error 10.36 degrees).
2) The system is able to determine tree shape and traits in 8.47 minutes on average for small trees and does not require human intervention or hand-tuning.

II. RELATED WORK

A variety of sensors and approaches have been used to sense tree structure. For a detailed review relative to plant phenotyping, please see Li et al. [3]. The sensors used are divided into the following categories: motion trackers, laser scanners, depth cameras, and color cameras.

Motion trackers can be used to semi-automatically determine the locations of rigid, thick branches by manually positioning the sensors at the branch junction points of the tree [4], [5], [6]. Although this approach is accurate, it is labor-intensive and is restricted to measuring the positions and orientations of branches. Tree measurement systems based on laser scanners [7], [8] can measure the trees in a more automated way, but their resolution depends on the distance to the branches. Hence, accurately and robustly estimating tree traits can become challenging, particularly for large point clouds. In addition, these systems are unable to utilize texture information from the trees to carry out reconstruction. Structured-light depth cameras such as the Microsoft Kinect have been used to create models and estimate traits on landscape trees in [9], but they are limited to situations without direct sunlight exposure. More recent methods that rely on time-of-flight (ToF) RGBD sensors have shown promising results [10], [11], [12], [13], [14]. Although these methods overcome some limitations of lidar-based systems, such as the inability to utilize texture, their resolution is also a function of distance, and hence it can be difficult to apply them to larger structures.

This work is perhaps closest to the wine grape pruner of Botterill et al. [15]. In [15], the vision component consists of a trinocular stereo color camera rig. The three-dimensional structure of the vines and support structures is determined by segmenting images, estimating correspondences, and incrementally building a three-dimensional model of the row via bundle adjustment as the unit moves down the row.
While we also use color cameras and segment target regions in images, we capture many more than three camera positions from calibrated poses because of our use of a high-accuracy industrial robot (in this work, 56 positions of two cameras are used), and as a result we do not require bundle adjustment.

III. SYSTEM DESCRIPTION

This section describes the hardware and software components of the system for tree shape estimation as well as their interdependencies and corresponding design choices. The system was designed based on the assumption that its robotic and vision components need to operate in an orchard setting, where trees are usually planted in rows and may or may not be supported by a trellis (see Figure 2).

Fig. 2. [Best viewed in color.] Example of a typical apple orchard. Trees are planted in rows and supported with a trellis.

The process for data acquisition is to park the robot unit such that the robot and cameras are facing the tree, and such that the background unit (Figure 4b) is behind the tree. Then the robot performs a series of movements within its workspace, acquiring images of the trees from a range of viewpoints. When the movements are complete, both units are moved to the next tree. The images acquired at each stop of the mobile robot unit (MRU) are then provided as inputs to the software pipeline, which is responsible for computing all the tree traits observed in the image set. As explained in detail in Section III-B, the software pipeline consists of six completely autonomous steps.

A. Hardware components

The physical system consists of two units: the MRU, which carries the robotic arm with cameras mounted on the end-effector, and a background unit. The MRU consists of a small truck with a robotic manipulator and a generator mounted on it, as shown in Figure 1. The robot used is a Denso VS-6577G industrial arm with an 850 mm reach. The end-effector holds two color cameras (Figure 4a) with 8 mm lenses; this choice was made so that multiple images could be captured at each stop of the robot, in order to minimize data acquisition time. The robot-camera calibration parameters are determined according to a modification of the calibration method for multiple cameras given in [17], with the implementation available in [18].

The purpose of the background unit (Figure 4b) is to shield the cameras' view from trees that are not being modeled at the current time. As previously mentioned, orchards are planted in rows, so when acquiring images of one leafless tree, other trees are visible. The color of the background unit is blue, since that color is not naturally found in the orchard setting and hence facilitates the segmentation of the trees from the images.

Fig. 4. [Best viewed in color.] (a) Close-up of the cameras and end-effector design. (b) The background unit, which consists of a trailer for the vertical section, as well as a ground covering.

B. Software components

As previously mentioned, at each stop of the MRU, the robot acquires a series of image pairs (i.e., one image per camera) at a number of predefined positions.¹ The sequence of images is fixed, and is determined empirically based on expected tree sizes and the available robot workspace and reach prior to experiments.

¹ A video of this process is shown at http://coviss.org/tabbmedeiros_rotse_iros17/.

The output after each stop of the MRU is a sequence of n measurements,

M = {(I_1^c, p_1), (I_2^c, p_2), ..., (I_n^c, p_n)}, c = 1, 2,   (1)

where I_j^c is the j-th image acquired by camera c and p_j is the robot pose (i.e., position and orientation) at position j.
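To make the structure of the measurement set in Eq. (1) concrete, the sketch below shows one possible Python container for it. The class and function names, and the representation of poses as 4 × 4 homogeneous transformation matrices, are illustrative assumptions rather than the authors' implementation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Measurement:
    image: np.ndarray  # I_j^c: color image from camera c at position j
    pose: np.ndarray   # p_j: assumed 4 x 4 HTM, robot base -> robot hand
    camera: int        # c in {1, 2}
    position: int      # j in 1..n

def collect_measurements(images_by_camera, poses):
    """Assemble M = {(I_j^c, p_j)}, c = 1, 2 (Eq. 1) for one MRU stop.
    images_by_camera: dict mapping camera index to the list of n images;
    poses: list of the n robot poses, one per predefined position."""
    M = []
    for j, p_j in enumerate(poses, start=1):
        for c in (1, 2):
            M.append(Measurement(images_by_camera[c][j - 1], p_j, c, j))
    return M
```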
Algorithm 1 summarizes the processing steps carried out after each stop of the MRU. Figure 3 shows an overview of the outputs of steps 2 to 5 of the algorithm. In the following sections, we explain each step in detail.

Algorithm 1 Computation of tree traits from measurements obtained by the MRU
Input: Set of MRU measurements M.
Output: Graph representation of tree traits.
1: Compute the external camera calibration parameters R_j^c, t_j^c for each image I_j^c.
2: Segment tree versus non-tree regions according to the method described in [19].
3: Reconstruct the tree volume using a shape-from-inconsistent-silhouette method detailed in [20], [21].
4: Compute the curve skeleton from the noisy reconstruction in 3 according to the method in [22].
5: Convert the curve skeleton to a graph representation.
6: Compute the features of interest from the skeleton representation.

Fig. 3. [Best viewed in color.] (a) Full image of the tree acquired by a consumer camera. This image is not used in the shape estimation process. (b) Image of the tree acquired by one of the cameras mounted on the robot. (c) Silhouette probability map of the image in 3b. (d) Reconstruction of the tree volume using the silhouette probability maps. (e) Curve skeleton computed from the reconstructed tree volume. (f) Graph structure of the curve skeleton – vertices are represented as spheres and edges as straight lines. Snapshots of the three-dimensional object files in this figure, and throughout this document, were produced with the MeshLab viewer [16].

1) Step 1: External camera calibration parameters computation: The robot and two cameras are calibrated according to [17], which results in three 4 × 4 homogeneous transformation matrices (HTMs). The first is the transformation from the robot base to the world coordinate system, denoted by X. The second and third are the transformations from the end-effector to the cameras. The transformation from the end-effector to a camera c is Z_c, and since we have two cameras, c = 1, 2.²

² Note that these definitions are the inverse of much of the literature on robot-world, hand-eye calibration; this was done for specific reasons as outlined in [17].

Given a particular image I_j^c, we compute the HTM that corresponds to the external camera calibration parameters, meaning the rotation matrix and translation vector that transform the world coordinate system to the camera's coordinate system for that stop of the robot arm. We denote this matrix A_j^c. Let the transformation from the robot base to the robot hand when image I_j^c was acquired be B_j^c; then A_j^c is:

A_j^c = Z_c B_j^c X⁻¹   (2)

Internal camera calibration parameters, including those for distortion, are computed for each camera as a precursor to the robot-world, hand-eye calibration procedure. In this manner, internal and external calibration information is available for all images.
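As a concrete illustration of Eq. (2), the minimal sketch below composes the HTMs with numpy. It assumes X, Z_c, and B_j^c have already been estimated by the robot-world, hand-eye calibration of [17]; the function name is ours, not from the paper's code.

```python
import numpy as np

def external_calibration(Z_c, B_jc, X):
    """Evaluate Eq. (2): A_j^c = Z_c * B_j^c * X^{-1}.

    All arguments are 4 x 4 homogeneous transformation matrices:
      Z_c  : end-effector -> camera c,
      B_jc : robot base -> robot hand when image I_j^c was acquired,
      X    : robot base -> world coordinate system.
    Returns A_j^c, which maps world coordinates to camera c's
    coordinates, plus its rotation and translation parts."""
    A_jc = Z_c @ B_jc @ np.linalg.inv(X)
    R_jc = A_jc[:3, :3]  # external rotation matrix
    t_jc = A_jc[:3, 3]   # external translation vector
    return A_jc, R_jc, t_jc
```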
2) Step 2: Segmentation: The method we use for reconstruction requires silhouettes or silhouette maps. Although we use the background unit to control some aspects of the scene, illumination changes in the dynamic outdoor environment produce challenges for segmentation. We use the segmentation method described in [19]. The method is color-based; it locates probable blue background pixels and computes the optimal parameters for producing silhouette probability maps. Silhouette probability maps are computed for each image I_j^c.

3) Step 3: Reconstruction: A method for shape-from-inconsistent-silhouette reconstruction is given in [20]. In that work, it is assumed that each image has camera calibration information, and the reconstruction problem is formulated as a pseudo-Boolean optimization problem in a voxelized search area. In [21], the run time of the method of [20] was reduced by introducing a hierarchical version of the search: voxels start at a certain resolution, and the algorithm is run. Then, voxels that were marked as occupied are included in the search area for the next round, as are neighbors within a certain distance, and these voxels are divided by an octree. This process continues according to the desired ending voxel size and number of octree divisions, as set by the user.

4) Step 4: Skeletonization: The reconstruction resulting from step 3 is characterized by a noisy surface, and depending on the ending voxel size, it can also be sparse. The curve skeleton is a locally one-dimensional curve that lies roughly in the center of the object. While there are many different approaches for computing the curve skeleton, we were not able to find any that could deal with the noisy and sparse nature of our data. Consequently, we compute the curve skeleton from the reconstruction in step 3 using our approach described in [22].

5) Step 5: Graph representation: The output of the curve skeletonization in step 4 is a set of undirected segments S, each of which is made up of a set of voxels. Each segment is indexed by its endpoints v_a and v_b. This segment set is converted to a directed graph representation G = (V, E) as described by Algorithm 2. Initially, V = {v_0} and E is empty. The root of the graph, v_0 in line 2, is selected by sorting the endpoints of the segments in S and choosing the one that is closest to the ground plane in the world coordinate system. v_0 then becomes the first member of queue Q. The graph is constructed from the segments by iteratively selecting the first item v_a from Q, finding any segments in S incident to v_a, creating an edge (v_a, v_b), and inserting it in E. The algorithm progresses until all vertices and edges have been explored. The result is a directed graph where v_0, in this application, represents the intersection of the tree with the ground. Vertices with out-degree of zero represent terminal tips.

Algorithm 2 Conversion from curve skeleton to directed graph representation
Input: Set of skeleton segments S.
Output: Directed graph G = (V, E) and root v_0 representing the tree structure.
1: V = {}, E = {}
2: v_0 = endpoint of S closest to the ground plane.
3: Q = {v_0}, where Q is a queue.
4: while |Q| > 0 do
5:   v_a = first element of Q, which is removed from Q.
6:   S' = segments from S that include v_a as an endpoint.
7:   S = S − S'.
8:   for each segment s_j ∈ S' do
9:     s_j has two endpoints, v_a and v_b.
10:    Create edge e_j : (v_a, v_b).
11:    v_b is added to Q if it is not already present.
12:    E = E ∪ {e_j}.
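The sketch below is one possible Python rendering of Algorithm 2. The segment representation (frozensets of voxel-coordinate endpoints), the ground-plane test (distance in z from a ground height), and the function name are our assumptions for illustration, not the authors' code.

```python
from collections import deque

def skeleton_to_digraph(segments, ground_z=0.0):
    """A sketch of Algorithm 2. segments: set of frozensets {v_a, v_b},
    where each endpoint is a voxel coordinate tuple (x, y, z). Returns
    the vertex set V, the directed edge list E (edges oriented away from
    the root), and the root v0."""
    endpoints = {v for seg in segments for v in seg}
    # Line 2: root v0 is the endpoint closest to the ground plane; here
    # we assume z is height and the ground sits at z = ground_z.
    v0 = min(endpoints, key=lambda v: abs(v[2] - ground_z))
    V, E = {v0}, []
    remaining = set(segments)
    Q = deque([v0])                                 # line 3
    while Q:                                        # line 4
        v_a = Q.popleft()                           # line 5
        incident = {s for s in remaining if v_a in s}  # line 6
        remaining -= incident                       # line 7
        for s_j in incident:                        # lines 8-12
            (v_b,) = s_j - {v_a}   # the other endpoint of segment s_j
            E.append((v_a, v_b))
            if v_b not in V:
                V.add(v_b)
                Q.append(v_b)
    return V, E, v0
```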
6) Step 6: Computation of features of interest: The outputs of the prior steps are used to compute the features of interest: branch junction locations, branch diameters, branch segment lengths, and branch angles. For a visual illustration of the features, see Figure 5. Branch junction locations are determined from edge vertices with out-degree greater than zero. Vertices are centered on voxels, so determining the junction location consists of reading off the voxel's location within the voxel grid from the reconstruction step.

Fig. 5. [Best viewed in color.] An illustration of the features computed by the RoTSE: branch junction location, location of diameter measurement, branch length, and branch angle.

The processes to compute the branch diameter, branch length, and branch angle are described with the help of Figure 6. In this discussion, we assume that we are computing the features of the child branch, which is represented by the edge e = (v_0, v_1), where v_0 corresponds to the branch junction location, determined as explained above, and v_1 is the other endpoint of the child branch.

Fig. 6. [Best viewed in color.] An illustration of how features are computed in Step 6, showing the branch junction location, the distance label d_0, the parameter n_d, and the parent and child branches. See the text in Section III-B.6 for a discussion.

Once v_0 is known, the branch diameter is computed as follows. The branch junction location is the intersection of a branch with its parent branch inside the physical tree (Figure 6a), but we aim to find the branch diameter at a given distance from the branch junction location so that it reflects the horticultural practice of measuring branch diameters, as in Figure 5b. This process is shown in Algorithm 3. As part of the skeletonization step, distance labels d_i are computed from every occupied voxel to the closest empty voxel using a linear-time algorithm [23]. In addition, the skeletonization step generates the ordered set P, which corresponds to the curve segment vertices between v_0 and v_1, in order. First, we let d_0 be the distance label representing the smallest distance to the surface of the reconstruction volume from v_0. We make the assumption that the tree shape can be represented by a collection of spheres, and that d_0 is the radius of such a sphere centered at v_0; this sphere is represented by the light red circle in Figure 6b. From there, we determine the voxel P_a on P that is d_0 away from v_0 (lines 4-5). We then compute the average distance label of the next n_d vertices to produce an average branch radius (line 6 and Figures 6c-6d). n_d is a parameter set by the user whose purpose is to remove locally occurring noise in the distance labels. The diameter measurement d̂_e is given by twice the radius.

Algorithm 3 Branch diameter determination
Input: Path P from v_0 to v_1, distance labels d_i, parameter n_d.
Output: Branch diameter estimate d̂_e.
1: d_0 is the distance label of v_0.
2: a = 1
3: P_a is the a-th vertex in P.
4: while ||P_a − v_0|| < d_0 do
5:   a = a + 1
6: d̂_e = 2 × (1/n_d) Σ_{j=a}^{a+n_d} d_{P_j}

Branch angles are determined with another user parameter, d_angle. It represents the distance from the branch junction at which the branch angle should be measured, since branches are not typically straight in all regions. Similarly to lines 4-5 in Algorithm 3, the voxels that are d_angle away from v_0 are located on the paths along the parent and child branches (represented by red and blue lines, respectively, in Figure 6d). Once these voxels are determined, vectors are generated and the angle between them is computed.

The branch length computation sums the Euclidean distances between voxels on the path from v_0 to v_1, so that the branch length l is:

l = Σ_{i=1}^{|P|−1} ||P_i − P_{i+1}||   (3)

where P_i is the i-th vertex in P.
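The following sketch gives plausible Python versions of the three feature computations. Algorithm 3 and Eq. (3) are followed directly; the angle routine reflects our reading of the d_angle description (Euclidean distance from the junction), and all function names and the dict-based distance labels are illustrative assumptions.

```python
import numpy as np

def branch_diameter(P, dist, n_d):
    """Algorithm 3. P: ordered voxel centers (tuples) from v0 to v1;
    dist: distance label per voxel, i.e., distance to the nearest empty
    voxel (computable with a linear-time distance transform such as
    scipy.ndimage.distance_transform_edt); n_d: labels to average."""
    d0 = dist[P[0]]
    a = 1
    # Lines 4-5: advance past the sphere of radius d0 around the junction.
    while a < len(P) - 1 and np.linalg.norm(np.subtract(P[a], P[0])) < d0:
        a += 1
    # Line 6: twice the average of the next n_d distance labels.
    radius = np.mean([dist[P[j]] for j in range(a, min(a + n_d, len(P)))])
    return 2.0 * radius

def branch_length(P):
    """Eq. (3): sum of Euclidean distances between consecutive voxels."""
    pts = np.asarray(P, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

def branch_angle(P_parent, P_child, d_angle):
    """Angle between parent and child branches, measured d_angle from the
    junction; both paths are assumed ordered starting at the junction."""
    def point_past(P, d):
        # First path voxel at least d (Euclidean) from the junction,
        # mirroring lines 4-5 of Algorithm 3.
        for p in P[1:]:
            if np.linalg.norm(np.subtract(p, P[0])) >= d:
                return np.asarray(p, dtype=float)
        return np.asarray(P[-1], dtype=float)
    v0 = np.asarray(P_child[0], dtype=float)
    u = point_past(P_parent, d_angle) - v0
    w = point_past(P_child, d_angle) - v0
    cos_angle = np.dot(u, w) / (np.linalg.norm(u) * np.linalg.norm(w))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
```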
IV. EXPERIMENTS

We evaluate the performance of the RoTSE using data collected from twelve trees (denoted Trees A-L in subsequent sections) in the USDA-AFRS orchard in Kearneysville, West Virginia. Performance is evaluated in terms of accuracy, computation time, and robustness. In all of our experiments, the trees were reconstructed using 56 images of 1900 × 1200 pixels. For the reconstruction step, the initial voxel size was set at 12 mm³ and the final voxel size at 3 mm³. The search region was 82 million voxels, and it was the same for all of the trees. For the feature measurement step, n_d = 5 and d_angle = 50 mm. All of the results shown in this paper were generated on a workstation with one 12-core processor and 192 GB of RAM. The following subsections describe the experimental procedures in detail.

A. Accuracy

Accuracy is evaluated by comparing the tree traits computed by our system with manual measurements obtained from trees in the orchard. Since ground truth generation is labor-intensive, we limit our manual measurements to two trees (Trees A and G). Our evaluation focuses on the following parameters (see Figure 5): number of branches, branch diameter, branch length, and branch angle. Branch location is difficult to measure in practice without resorting to equipment such as that used in [4], [5], [6]. Hence, we omit that specific parameter from our evaluation.

Figure 7 shows the two trees used in the accuracy evaluation. The trees for this quantitative comparison were simplified so that they consisted of only a trunk and primary branches; higher-order branches were removed. The labels in the figures correspond to the branch numbers and their corresponding diameters (other parameters are omitted from the figure for clarity). When the manual measurements were acquired, each branch was given a label and the measurements were noted on a photograph of the tree. Visual inspection with a 3D model viewer was used to determine the mapping between the RoTSE labeling and the manually labeled branches so that measurement error could be reported. Figure 8 shows the accuracy results for the trees shown in Figure 7. Table I summarizes the error statistics.

TABLE I
Summary of the accuracy of RoTSE for 2 trees. MSE is the mean squared error and STD is the standard deviation of the error.

      Diameter (mm)   Length (mm)   Angle (degrees)
MSE   0.99            45.64         10.36
STD   2.85            124.51        32.18

B. Computation time

The computation time for each step of the proposed approach is shown in Table II for all twelve trees. The average time to compute the tree features is approximately 8.5 minutes, in addition to approximately 1 minute for data acquisition and the time to move the platform to the next tree. Note that feature computation for a given tree can be performed in parallel with data acquisition for the following tree, so the overall time can be amortized as multiple trees are measured.

C. Robustness

We qualitatively evaluate the robustness of our system using data from twelve trees in the orchard. Figures 3 and 9-11 show some examples of the trees and their representation by RoTSE. The three-dimensional nature of the trees can be seen in the video in our Supplementary Materials.³

³ We intend to publish this video in an open access format, at the National Agriculture Library's Ag Data Commons for instance, so that it can serve as a companion to this paper.
In general, the primary branches are captured by the system, while the secondary and higher-level branches are also captured, but less reliably. A major reason for this is the voxel size setting; the higher-order branches have smaller diameters, and in order to facilitate their capture the voxel size would have to be decreased, which would increase the runtime.

Fig. 7. [Best viewed in color.] Trees used in the accuracy evaluation along with branch identifiers and their corresponding diameters. (a) Tree A. (b) Tree G.

Fig. 8. Quantitative accuracy results for diameter, length, and angle per branch. GT is the ground truth and Est. are the measurements estimated by our system.

TABLE II
Computation time for the RoTSE, 12 trees.

Tree ID   Segmentation (s)   Reconstruction (s)   Skeletonization, graph rep., feature measurement (s)   Total time (minutes)
A         15.59              333.72               4.17                                                    5.89
B         19.78              372.14               3.73                                                    6.59
C         17.30              395.68               3.84                                                    6.95
D         14.88              409.14               5.32                                                    7.16
E         20.43              445.29               3.74                                                    7.82
F         16.02              476.22               4.54                                                    8.28
G         22.17              471.04               3.82                                                    8.28
H         16.10              517.11               3.92                                                    8.95
I         19.35              523.08               4.75                                                    9.12
J         17.14              527.36               4.02                                                    9.14
K         19.69              667.07               5.38                                                    11.54
L         20.31              690.50               4.33                                                    11.92
Average   18.23              485.70               4.30                                                    8.47
Stdev     2.34               109.07               0.59                                                    1.84

Fig. 9. [Best viewed in color.] Results for tree B: (a) full image of tree B, (b) reconstruction, (c) curve skeleton, (d) graph representation. No ground truth available.

Fig. 10. [Best viewed in color.] Results for tree D: (a) full image of tree D, (b) reconstruction, (c) curve skeleton, (d) graph representation. No ground truth available.

Fig. 11. [Best viewed in color.] A detail of the results for tree D: (a) detail of tree D, (b) reconstruction, (c) curve skeleton.

D. Discussion

In terms of accuracy, the methods involved in RoTSE utilize discrete divisions of the search region into voxels, and for the experiments in this paper, the minimum voxel size was set to 3 mm³. Concerning the diameter measurements, the mean squared error of 0.99 mm is one-third the length of one of the voxel sides, which was a surprising result. The angle and length measurements were not as accurate, which may also be influenced by the discrete way in which those measurements were acquired. For the angle measurements, the branch junction voxel as well as one voxel on each of the parent and child branches were used to compute the angle, but it may be that those particular voxels were not the most appropriate ones to use for computing the measurement. On length, the individual voxel distances are summed to produce a branch length. However, this may produce a length that is longer than the actual length because of the noisiness of the branch path (for example, notice the blue segment in the lower part of Figure 11c).

On the subject of computation time, the main bottleneck is the reconstruction step. The main parameters that affect the runtime of the reconstruction method we use are the number of pixels, the number of voxels, and the complexity of the object. Since these are small trees, the initial voxel size for the reconstruction step has to be set relatively small, at 12 mm³.
Small initial voxel sizes, in conjunction with a relatively large search region resulting from varying distances from the truck to the tree as well as differences in tree height and width, led to long run times for the reconstruction step.

V. CONCLUSIONS AND FUTURE WORK

We have described RoTSE, which creates shape models of leafless trees in field settings and computes tree traits such as branch junction locations, branch diameters, branch lengths, and branch angles. The system was evaluated with respect to accuracy in estimating tree traits, computation time, and robustness. Through these evaluations, we could see that RoTSE at this point is suitable for structural phenotyping, and provides an advance in that field, considering that no in-field systems for structural phenotyping are currently available. In the structural phenotyping problem, computation time is not an issue, but accuracy is. The robotic pruning problem, on the other hand, is an automation task that requires a fast runtime. Consequently, the runtime of RoTSE must be decreased in order for the system to be adequate for that purpose.

In the course of this research, we have identified some future work that would ultimately improve RoTSE. First, the operation of the system as described in this paper is offline: images are acquired in the field for all trees, and then the images are transferred to a high-performance computer, where the steps are run for all trees until completion. In our future work we intend to incorporate the software steps more fully into the MRU, with the eventual goal of producing tree shape models and estimates in the field. As a part of this work, we would like to speed up the reconstruction step, which is the major bottleneck in terms of runtime for the RoTSE system. Finally, in this paper, we do not deal with cycles in the construction of the graph in Section III-B.5. However, in real-life situations, cycles are often present, from branches overlapping each other and crossing, to intersections of the trellis materials with the tree. Strategies to disambiguate branch segments that form cycles are part of our plans for future work.

Acknowledgements: A. Tabb would like to acknowledge and thank Scott Wolford for his expertise and effort in the construction of the MRU and background units. She would also like to thank Larry Crim for his expertise concerning the measurement of tree traits.

REFERENCES

[1] R. Lehnert, "Automated pruning with robotics," Good Fruit Grower, 2015.
[2] D. Houle, D. R. Govindaraju, and S. Omholt, "Phenomics: the next challenge," Nature Reviews Genetics, vol. 11, no. 12, pp. 855–866, 2010.
[3] L. Li, Q. Zhang, and D. Huang, "A review of imaging techniques for plant phenotyping," Sensors, vol. 14, no. 11, pp. 20078–20111, 2014.
[4] H. Sinoquet, P. Rivet, and C. Godin, "Assessment of the three-dimensional architecture of walnut trees using digitising," Silva Fennica, vol. 31, no. 3, pp. 265–273, 1997.
[5] R. Arikapudi, S. Vougioukas, and T. Saracoglu, "Orchard tree digitization for structural-geometrical modeling," in Precision Agriculture '15. Wageningen Academic Publishers, 2015, pp. 161–168.
[6] S. G. Vougioukas, R. Arikapudi, and J. Munic, "A study of fruit reachability in orchard trees by linear-only motion," IFAC-PapersOnLine, vol. 49, no. 16, pp. 277–280, 2016.
[7] H. Medeiros, D. Kim, J. Sun, H. Seshadri, S. A. Akbar, N. M. Elfiky, and J. Park, "Modeling dormant fruit trees for agricultural automation," Journal of Field Robotics, 2016. [Online].
Available: http://dx.doi.org/10.1002/rob.21679
[8] Y. Livny, F. Yan, M. Olson, B. Chen, H. Zhang, and J. El-Sana, "Automatic reconstruction of tree skeletal structures from point clouds," ACM Trans. Graph., vol. 29, no. 6, pp. 151:1–151:8, Dec. 2010. [Online]. Available: http://doi.acm.org/10.1145/1882261.1866177
[9] W. Liu, G. Kantor, F. D. la Torre, and N. Zheng, "Image-based tree pruning," in 2012 IEEE International Conference on Robotics and Biomimetics (ROBIO), Dec. 2012, pp. 2072–2077.
[10] M. Karkee, B. Adhikari, S. Amatya, and Q. Zhang, "Identification of pruning branches in tall spindle apple trees for automated pruning," Computers and Electronics in Agriculture, vol. 103, pp. 127–135, 2014.
[11] M. Karkee and B. Adhikari, "A method for three-dimensional reconstruction of apple trees for automated pruning," Transactions of the ASABE, vol. 58, no. 3, pp. 565–574, 2015.
[12] N. M. Elfiky, S. A. Akbar, J. Sun, J. Park, and A. Kak, "Automation of dormant pruning in specialty crop production: An adaptive framework for automatic reconstruction and modeling of apple trees," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2015, pp. 65–73.
[13] S. A. Akbar, N. M. Elfiky, and A. Kak, "A novel framework for modeling dormant apple trees using single depth image for robotic pruning application," in Robotics and Automation (ICRA), 2016 IEEE International Conference on. IEEE, 2016, pp. 5136–5142.
[14] S. Chattopadhyay, S. A. Akbar, N. M. Elfiky, H. Medeiros, and A. Kak, "Measuring and modeling apple trees using time-of-flight data for automation of dormant pruning applications," in 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), Mar. 2016, pp. 1–9.
[15] T. Botterill, S. Paulin, R. Green, S. Williams, J. Lin, V. Saxton, S. Mills, X. Chen, and S. Corbett-Davies, "A robot system for pruning grape vines," Journal of Field Robotics, 2016. [Online]. Available: http://dx.doi.org/10.1002/rob.21680
[16] P. Cignoni, M. Callieri, M. Corsini, M. Dellepiane, F. Ganovelli, and G. Ranzuglia, "MeshLab: an open-source mesh processing tool," in Eurographics Italian Chapter Conference, V. Scarano, R. D. Chiara, and U. Erra, Eds. The Eurographics Association, 2008.
[17] A. Tabb and K. M. Ahmad Yousef, "Solving the robot-world hand-eye(s) calibration problem with iterative methods," Machine Vision and Applications, May 2017. [Online]. Available: http://dx.doi.org/10.1007/s00138-017-0841-7
[18] A. Tabb, "Data from: Solving the robot-world hand-eye(s) calibration problem with iterative methods," 2017. [Online]. Available: http://dx.doi.org/10.15482/USDA.ADC/1340592
[19] A. Tabb and H. Medeiros, "Automatic segmentation in dynamic outdoor environments," arXiv:1702.07611 [cs.CV], 2017.
[20] A. Tabb, "Shape from silhouette probability maps: reconstruction of thin objects in the presence of silhouette extraction and calibration error," in Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, June 2013.
[21] A. Tabb, "Shape from inconsistent silhouette: Reconstruction of objects in the presence of segmentation and camera calibration error," Ph.D. dissertation, Purdue University, 2014.
[22] A. Tabb and H. Medeiros, "Fast and robust curve skeletonization for real-world elongated objects," arXiv:1702.07619 [cs.CV], 2017.
[23] A. Meijster, J. B. Roerdink, and W. H. Hesselink, "A general algorithm for computing distance transforms in linear time," in Mathematical Morphology and its Applications to Image and Signal Processing. Springer, 2002, pp. 331–340.