Continuously-Adaptive Haptic Rendering

Jihad El-Sana1 and Amitabh Varshney2

1 Department of Computer Science, Ben-Gurion University, Beer-Sheva, 84105, Israel
jihad@cs.bgu.ac.il
2 Department of Computer Science, University of Maryland at College Park, College Park, MD 20742, USA

Abstract. Haptic display with force feedback is often necessary in several virtual environments. To enable haptic rendering of large datasets we introduce Continuously-Adaptive Haptic Rendering, a novel approach to reduce the complexity of the rendered dataset. We construct a continuous, multiresolution hierarchy of the model during pre-processing and then at run time we use a high-detail representation for regions around the probe pointer and a coarser representation farther away. We achieve this by using a bell-shaped filter centered at the position of the probe pointer. Using our algorithm we are able to haptically render datasets one to two orders of magnitude larger than otherwise possible. Our approach is orthogonal to previous work on accelerating haptic rendering and thus can be used in conjunction with those methods.

1 Introduction

Haptic displays with force and tactile feedback are essential to realism in virtual environments and can be used in various applications such as medicine (virtual surgery for medical training, molecular docking for drug design), entertainment (video games), education (studying nano, macro, or astronomical scale natural and science phenomena), and virtual design and prototyping (nanomanipulation, integrating haptics into CAD systems).

Humans can sense touch in two ways: tactile and kinesthetic. Tactile refers to the sensation caused by stimulating the skin nerves, such as by vibration, pressure, and temperature. Kinesthetic refers to the sensation from motion and forces, which trigger the nerve receptors in the muscles, joints, and tendons. Computer haptics is concerned with generating and rendering haptic stimuli to a computer user, just as computer graphics deals with visual stimuli. In haptic interface interaction, the user conveys a desired motor action by physically manipulating the interface, which in turn provides tactual sensory feedback to the user by appropriately stimulating her/his tactile and kinesthetic systems.

Figure 1 shows the basic process of haptically rendering objects in a virtual environment. As the user manipulates the generic probe of the haptic device, the haptic system keeps track of the position and the orientation of the probe. When the probe collides with an object, the mechanistic model calculates the reaction force based on the depth of the probe into the virtual object.

Fig. 1. The haptic rendering processes (generic probe information, collision detection, contact servo loop, forward and inverse kinematics, force mapping, and touch effects)

Several haptic techniques have been developed to haptically render 3D objects, which can have either a surface-based or a volume-based representation. Haptic interaction in a virtual environment can be point-based or ray-based. In point-based haptic interaction only the end-point of the haptic device, known as the Haptic Interface Point (HIP), interacts with the objects. In ray-based haptic interaction, the generic probe of the haptic device is modeled as a finite ray. The collision is detected between this ray and the object, and the orientation of the ray is used in computing the haptic force feedback. In this approach the force reflection is calculated using a linear spring law, F = kx.
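As a concrete illustration of the point-based model above, the following minimal sketch computes a penalty force from the penetration depth of the HIP against a local planar approximation of the surface. The types and names are illustrative assumptions, not the actual GHOST API or the implementation described later in this paper.

```cpp
#include <cmath>

// Minimal sketch of point-based force reflection with a linear spring law
// (F = k * x). All names here are illustrative; a real haptic API would
// supply its own probe and contact types.
struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    double dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

// Reaction force for a haptic interface point (HIP) against a plane that
// locally approximates the contacted surface. The force is proportional to
// the penetration depth and directed along the outward surface normal.
Vec3 pointBasedForce(const Vec3& hip,          // current probe end-point
                     const Vec3& surfacePoint, // a point on the contact plane
                     const Vec3& unitNormal,   // outward unit normal
                     double stiffness)         // spring constant k
{
    double depth = (surfacePoint - hip).dot(unitNormal); // penetration depth x
    if (depth <= 0.0) return {0.0, 0.0, 0.0};            // no contact
    return unitNormal * (stiffness * depth);             // F = k * x
}
```

This per-cycle force computation is what the servo loop must evaluate at roughly 1000 Hz, which is why reducing the number of primitives it must consider is so important.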
In visual rendering several techniques are used to enhance the interactivity and improve the realism of the rendered objects. Smooth shading and texture mapping are good examples of these techniques. Similar algorithms are used in haptic rendering to convey the tactual feeling of the inspected objects. Some of these approaches have been adapted from graphics rendering while others have been developed exclusively for haptic rendering.

2 Previous Work

In this section we overview some of the work done in haptic rendering for virtual environment applications. Haptic rendering has been found to be particularly useful in molecular docking [3] and nanomanipulation [19].

Randolph et al. [13] have developed an approach for point-based haptic rendering of a surface by using an intermediate representation. A local planar approximation to the surface is computed at the collision point for each cycle of the force loop. The reaction force vector is computed with respect to the tangent plane. This approach has one major drawback: undesirable force discontinuities may appear if the generic probe of the haptic device is moved over large distances before the new tangent plane is updated. An improvement to this method has been presented by Salisbury and Tarr [17].

Basdogan et al. [2] have developed a ray-based rendering approach. The generic probe of the haptic device is modeled as a line segment. They update the simulated generic probe of the haptic device (stylus) as the user manipulates the actual one. They detect collisions between the simulated stylus and the virtual objects in three progressively nested checks: (a) the bounding boxes of the virtual objects, (b) the bounding boxes of appropriate triangular elements, and (c) appropriate triangular elements. They estimate the reaction force by using a linear spring law model.

Ruspini et al. [16] introduce the notion of a proxy on the haptic system as a massless sphere that moves among the objects in the environment. They assume that all the obstacles in the environment can be divided into a finite set of convex components. During the update process, the proxy attempts to move to the goal configuration using direct linear motion.

Gregory et al. [8] have developed an efficient system, H-Collide, for computing contact(s) between the probe of the force-feedback device and objects in the virtual environment. Their system uses spatial decomposition, a bounding volume hierarchy, and exploits frame-to-frame coherence to achieve a speed improvement by a factor of 3 to 20.

Polygonal or polyhedral descriptions are often used to represent objects in virtual environments. The straightforward haptic rendering of these objects often does not convey the desired shape to the user. Morgenbesser and Srinivasan [15] have developed force shading, in which the force vector is interpolated over the polygonal surfaces. Haptic rendering has also been successfully pursued for volumetric datasets [1] and for NURBS surfaces [4]. The sensations of touch have been conveyed to the human tactile system using textures generated by force perturbation and displacement mapping. Force perturbation refers to the technique of modifying the direction and magnitude of the force vector to generate surface effects such as roughness [14]. In displacement mapping the actual geometry of the object is modified to display the surface details. To improve the realism of the haptic interaction, such as the push of a button or the turn of a switch, friction effects have been introduced.
Friction can be simulated by applying static and dynamic forces in a direction tangential to the normal force.

2.1 Haptic Rendering

The haptic rendering process involves the following three steps:

- Initializing the haptic device interface and transferring the dataset representation from the user data buffers to the haptic device drivers or API buffers. This step may require translating the data from the user representation to match the haptic API representation.
- Collision detection between the elements representing the virtual objects and the probe of the haptic device. Such detection becomes much more complex when the probe has multiple dynamic fingers.
- Estimating the force that the haptic device needs to apply to the user's hand or finger. This force is fed to the generic probe.

We would like to reduce the overhead of the above three steps. Different approaches could be used to achieve this goal. A simple way would be to subdivide the dataset into disjoint cells (using an octree or any other spatial subdivision) during pre-processing. Then at run time only the cells which are within some threshold distance from the probe pointer are considered in the collision detection and force feedback estimation. This approach has two drawbacks. First, the selected cells may eliminate part of the force field that affects the user. For example, when haptically rendering a surface as in Figure 2, the user may sense an incorrect force when using spatial subdivision. Second, if the user moves the probe pointer too fast for the application to update the cells, the user could perceive rough (and incorrect) force feedback.

Another approach to reduce the above overhead could be to reduce the complexity of the dataset through simplification. Several different levels of detail could then be constructed off-line. At run time, an appropriate level is selected for each object. However, switching between the different levels of detail at run time may lead to noticeable changes in the force feedback, which is distracting. Also, if the objects being studied are very large, this method will provide only one level of detail across the entire object.

Fig. 2. The use of spatial subdivision may result in an incorrect sense of the force field (the selected region around the probe cursor excludes part of the surface)

In this paper we introduce Continuously-Adaptive Haptic Rendering, a novel approach to reduce the complexity of the rendered dataset, which is based on the View-Dependence Tree introduced by El-Sana and Varshney [6]. We use the same off-line constructed tree, and at run time we use a different policy to determine the various levels of detail at the different regions of the surface.

2.2 View-Dependent Rendering

View-dependent simplifications using the edge-collapse/vertex-split primitives include work by Xia et al. [20], Hoppe [10], Gueziec et al. [9], and El-Sana and Varshney [6]. View-dependent simplifications by Luebke and Erikson [12] and De Floriani et al. [5] do not rely on the edge-collapse primitive. Klein et al. [11] have developed an illumination-dependent refinement algorithm for multiresolution meshes. Schilling and Klein [18] have introduced a refinement algorithm that is texture dependent. Gieng et al. [7] produce a hierarchy of triangle meshes that can be used to blend different levels of detail in a smooth fashion.

The view-dependence tree [6] is a compact multiresolution hierarchical data structure that supports view-dependent rendering. In fact, for a given input dataset, the view-dependence tree construction often leads to a forest (a set of trees), since not all the nodes can be merged together to form one tree.
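To make the data structure concrete, the following is a minimal sketch of what a view-dependence tree node might hold, based on the description in [6] and in Section 3. The field names and layout are our own illustrative assumptions rather than the actual implementation.

```cpp
#include <vector>

// Hypothetical sketch of a view-dependence tree node. Leaves correspond to the
// original vertices; an interior node represents the vertex produced when its
// two children were collapsed during off-line construction.
struct VDNode {
    float position[3];            // vertex coordinates of this node
    float switchValue;            // simplification-metric value at which the
                                  // children collapsed into this node
    VDNode* parent = nullptr;     // nullptr for a root of the forest
    VDNode* left   = nullptr;     // children created by a vertex split
    VDNode* right  = nullptr;
    std::vector<int> adjacentTriangles; // triangles incident on this vertex
                                        // while the node is active
};

// At run time the active front of the forest is kept as a list of node
// pointers; a split replaces a node by its two children, a merge does the
// reverse.
using ActiveList = std::vector<VDNode*>;
```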
The view-dependence trees are able to adapt to various levels of detail. Coarse details are associated with nodes that are close to the top of the tree (roots) and high details are associated with nodes that are close to the bottom of the tree (leaves). The reconstruction of a real-time adaptive mesh requires the determination of the list of vertices of this adaptive mesh and the list of triangles that connect these vertices. Following [6], we refer to these lists as the list of active nodes and the list of active triangles.

3 Our Approach

We have integrated view-dependent simplification with haptic rendering to allow faster and more efficient force feedback. We refer to this as continuously-adaptive haptic rendering. As in graphics rendering, continuously-adaptive haptic rendering speeds up the overall performance of haptic rendering by reducing the number of triangles representing the dataset. In our approach we do not need to send the complete surface to the haptic system. Instead, we send a surface with high detail in the region close to the generic probe pointer and a coarser representation as the region gets farther from the generic probe.

3.1 Localizing the View-Dependence Tree

The construction of view-dependence trees results in dependencies between the nodes of the tree. These dependencies are used to avoid foldovers at run time by preventing the collapse or merge of some nodes before others. However, these dependencies may restrict the refinement of nodes that might have otherwise refined to comply with the visual fidelity or error metric. In order to reduce such restrictions we reduce the dependencies between the nodes of the tree. We can reduce the dependencies by localizing the tree, which refers to constructing the tree so as to minimize the distance between the nodes of the tree.

We define the radius of a subtree as the maximum distance between the root of the subtree and any of its children. We are currently using the Euclidean distance metric to measure the distance. We can localize a view-dependence tree by minimizing the radius of each subtree. Since we construct the tree bottom-up, the algorithm starts by initializing each subtree radius to zero (each subtree has only one node). Each collapse operation results in a merge of two subtrees. We collapse a node to the neighbor which results in the minimum radius. It is important to note that our algorithm does not guarantee an optimal radius for the final tree. In practice, it results in a fairly acceptable small radius.

3.2 Levels of Detail

When haptically rendering a polygonal dataset we need to detect collisions between the probe and the dataset and compute the force that the probe supplies to the user at very high rates (more than 1000 Hz). The triangles close to the probe contribute more to the force feedback and have a higher probability of collision with the probe. The triangles far from the probe have little effect on the force feedback and have a smaller probability of collision with the probe. In our approach we use a high-detail representation for regions near the probe and a coarser representation farther away. We achieve this by using a bell-shaped filter as in Figure 3(a). In our filter, the distance from the haptic probe pointer dictates the level of detail of each region. This filter can be seen as a mapping from distance from the probe pointer to the switch value (the switch value is the value of the simplification metric at which two vertices were collapsed at tree construction time).
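As a rough illustration of this mapping, the following sketch maps the distance of a region from the probe to a switch-value threshold. The cubic falloff, the parameter names, and the clamping radii are our assumptions; the paper describes the filter shape only qualitatively (Figure 3).

```cpp
// Hypothetical bell-shaped filter: maps the distance of a region from the
// haptic probe pointer to a switch-value threshold. A node whose stored
// switch value exceeds this threshold is refined (split); otherwise it stays
// coarse or merges back.
struct DetailFilter {
    float innerRadius;   // within this distance, full detail (threshold = 0)
    float outerRadius;   // beyond this distance, coarsest level
    float coarsestValue; // switch value corresponding to the coarsest level

    float switchThreshold(float distToProbe) const {
        if (distToProbe <= innerRadius) return 0.0f;          // finest detail
        if (distToProbe >= outerRadius) return coarsestValue; // coarsest level
        // Smooth cubic (smoothstep-like) falloff between the two radii,
        // approximating a bell-shaped profile.
        float t = (distToProbe - innerRadius) / (outerRadius - innerRadius);
        float s = t * t * (3.0f - 2.0f * t);
        return coarsestValue * s;
    }
};
```

With such a mapping, the run-time refinement described next reduces to comparing the threshold returned for a node's distance against the switch value stored at that node.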
The surface close to the probe should be displayed at its highest possible resolution in order to convey the best estimation of the force feedback. In addition, regions that are far enough from the probe need not be displayed at more than the coarsest level. We were able to achieve a further speed-up by changing the shape of our filter from a bell shape to a multiple-frustums shape. This reduces the time to compute the switch value of an active node, which needs to be executed for each node at each frame. Figure 3(b) shows the shape of the optimized filter. This change reduces the computation of the distance from the probe and of the cubic function (which we use to evaluate the bell-shaped filter) to finding the maximum difference along any of the three axes x, y, and z. We also allow the user to change some of the filter attributes that determine the relation between the level of detail and the distance from the probe pointer.

Fig. 3. (a) Ideal versus (b) optimized filter

At run time we load the view-dependence trees and initialize the roots as the active vertices. Then at each frame we repeat the following steps. First, we query the position and the orientation of the probe. Then we scan the list of active vertices. For each vertex we compute the distance from the probe position, determine the mapping to the switch-value domain, and then compare the resulting value with the switch value stored at the node. The node splits if the computed value is less than the switch value and the node satisfies the implicit dependencies for a split. The node merges with its sibling if the computed value is larger than the switch value stored at the parent of this node and the node satisfies the implicit dependencies for a merge. After each split, we remove the node from the active-nodes list and insert its two children into the active-nodes list. Then we update the adjacent-triangles list to match the change, and insert the PAT triangles into the adjacent lists of the newly inserted nodes. The merge operation is carried out in two steps: first we remove the two merged nodes from the active-nodes list, and then we insert the parent node into the active-nodes list. Finally, we update the adjacent-triangles list by removing the PAT triangles and merging the triangle lists of the two merged nodes (the interested reader may refer to the details in [6]). The resulting set of active triangles is sent to the haptic interface.

4 Further Optimizations

We were able to achieve further speedups by fine-tuning specific sections of our implementation. For instance, when updating the active-nodes list and active-triangles list after each step we replace pointers instead of removing and inserting them. For example, after a split we replace the node with one of its children (the left one) and insert the second child. Similarly, in a merge we replace the left child with its parent and remove the other child. Even though the active lists are lists of pointers to the actual nodes of the tree, their allocation and deallocation requires more time because it relies on the operating system.

The haptic and graphics buffers are updated in an incremental fashion. Since the change between consecutive frames tends to be small, this results in small changes in the haptic and graphics buffers. Therefore, we replace the vertices and triangles that do not need to be rendered in the next frame with the newly added vertices and triangles. This requires a very small update time that is not noticeable by the user.
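The per-frame active-list update of Section 3.2, combined with the pointer-replacement optimization above, might look roughly like the following sketch. It reuses the hypothetical VDNode, ActiveList, and DetailFilter types sketched earlier; the implicit dependency checks, sibling removal on merge, and PAT-triangle bookkeeping are elided, so this is an outline of the control flow rather than the actual implementation.

```cpp
#include <cmath>
#include <cstddef>

// Per-frame refinement of the active front (sketch). probePos is the queried
// haptic probe position; filter maps distance to a switch-value threshold.
// Dependency tests and adjacent-triangle updates are omitted for brevity.
void updateActiveNodes(ActiveList& active, const float probePos[3],
                       const DetailFilter& filter)
{
    for (std::size_t i = 0; i < active.size(); ++i) {
        VDNode* node = active[i];
        float dx = node->position[0] - probePos[0];
        float dy = node->position[1] - probePos[1];
        float dz = node->position[2] - probePos[2];
        float threshold =
            filter.switchThreshold(std::sqrt(dx * dx + dy * dy + dz * dz));

        if (node->left && threshold < node->switchValue) {
            // Split: replace the node with its left child in place and append
            // the right child (pointer replacement instead of remove + insert).
            active[i] = node->left;
            active.push_back(node->right);
        } else if (node->parent && threshold > node->parent->switchValue) {
            // Merge: replace this node with its parent; the sibling must also
            // be dropped from the active list (bookkeeping elided here).
            active[i] = node->parent;
        }
    }
}
```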
Since the graphics rendering and the haptic rendering run at different frequencies, we have decided to maintain them through different processes (which run on different processors on a multi-processor machine). The display runs at low update rates of about 20 Hz, while the haptic process runs at higher rates of about 1000 Hz. We also use another process to maintain the active lists and the view-dependence tree structure. At 20 Hz frequency we query the haptic probe for its position and orientation, then update the active lists to reflect the change of the probe pointer. Finally, we update the graphics and the haptic buffers. In this scenario, the graphics component is updated at 20 Hz and runs at this rate, while the haptic component runs at 1000 Hz but is updated at only 20 Hz. To better approximate the level of detail when the user is moving the probe fast, we use the estimated motion trajectory of the probe pointer, and the distance it has traveled since the previous frame, to perform a look-ahead estimation of the probe's likely location.

5 Results

We have implemented our algorithm in C++ on an SGI Onyx2 with InfiniteReality graphics. For haptic rendering we have used the PHANToM haptic device from SensAble Technologies, with six degrees of input freedom and three degrees of output freedom. The haptic interface is handled through the GHOST API library (from SensAble Technologies). This haptic device fails (the servo loop breaks) when it is pushed to run at less than 1000 Hz frequency.

Table 1. Results of our approach

Dataset     Triangles   CHR OFF: Avg. frame rate (Hz) / Quality   CHR ON: Avg. frame rate (Hz) / Quality
CAD-Obj1    2 K         1500 / good                               1500 / good
CAD-Obj2    8 K         600 / bad                                 1200 / good
Molecule    20 K        -- / breaks                               1000 / good
Terrain     92 K        -- / breaks                               1000 / good

We have conducted several tests on various datasets and have received encouraging results. Table 1 shows some of our results. It shows results of haptic rendering with (CHR ON) and without (CHR OFF) the use of continuously-adaptive haptic rendering. For medium-size datasets the haptic device works for some time and then fails. When the datasets become larger the haptic device fails almost immediately because it is not able to run at the minimum required frequency. This failure could be the result of failing to finish the collision detection process or failing to finish the force field estimation process. Reducing the dataset size using our algorithm enables successful haptic rendering of these datasets.

Figure 4 shows the system configuration we used in our testing. In our system the mouse and the haptic probe pointer are used simultaneously to change and update the viewed position of the dataset. Figure 5 shows a high level of detail around the probe pointer (shown as a bright sphere in the center).

Fig. 4. Our system configuration

Fig. 5. Haptic rendering of a terrain dataset (shaded and wireframe); the yellow sphere is the haptic probe pointer

6 Conclusions

We have presented the continuously-adaptive haptic rendering algorithm, which enables haptic rendering of datasets that are beyond the capability of current haptic systems. Our approach is based upon dynamic, frame-to-frame changes in the geometry of the surface and thus can be used with any of the prior schemes, such as bounding volume hierarchies, to achieve superior acceleration of haptic rendering. Haptic interfaces are being used in several real-life applications such as molecular docking, nanomanipulation, virtual design and prototyping, virtual surgery, and medical training.
We anticipate that our work highlighted in this paper will achieve accelerated haptics rendering for all of these applications.

Acknowledgements

This work has been supported in part by the NSF grants DMI-9800690 and ACR-9812572, and a DURIP award N00014970362. Jihad El-Sana has been supported in part by the Fulbright/Israeli Arab Scholarship Program and the Catacosinos Fellowship for Excellence in Computer Science. We would like to thank the reviewers for their insightful comments which led to several improvements in the presentation of this paper. We would also like to thank our colleagues at the Center for Visual Computing at Stony Brook for their encouragement and suggestions related to this paper.

References

1. R. S. Avila and L. M. Sobierajski. A haptic interaction method for volume visualization. In Proceedings, IEEE Visualization, pages 197-204, Los Alamitos, October 27-November 1, 1996. IEEE.
2. C. Basdogan, C. Ho, and M. Srinivasan. A ray-based haptic rendering technique for displaying shape and texture of 3-D objects in virtual environments. In ASME Dynamic Systems and Control Division, November 1997.
3. F. P. Brooks, Jr., M. Ouh-Young, J. J. Batter, and P. J. Kilpatrick. Project GROPE: haptic displays for scientific visualization. In Computer Graphics (SIGGRAPH '90 Proceedings), volume 24(4), pages 177-185, August 1990.
4. F. Dachille IX, H. Qin, A. Kaufman, and J. El-Sana. Haptic sculpting of dynamic surfaces. In Stephen N. Spencer, editor, Proceedings of the 1999 Symposium on Interactive 3D Graphics, pages 103-110, New York, April 26-28, 1999. ACM Press.
5. L. De Floriani, P. Magillo, and E. Puppo. Efficient implementation of multi-triangulation. In D. Elbert, H. Rushmeier, and H. Hagen, editors, Proceedings Visualization '98, pages 43-50, October 1998.
6. J. El-Sana and A. Varshney. Generalized view-dependent simplification. In Computer Graphics Forum, volume 18, pages C83-C94. Eurographics Association and Blackwell Publishers Ltd, 1999.
7. T. Gieng, B. Hamann, K. Joy, G. Schussman, and I. Trotts. Constructing hierarchies for triangle meshes. IEEE Transactions on Visualization and Computer Graphics, 4(2):145-161, 1998.
8. A. Gregory, M. Lin, S. Gottschalk, and R. Taylor. H-COLLIDE: A framework for fast and accurate collision detection for haptic interaction. Technical Report TR98-032, Department of Computer Science, University of North Carolina at Chapel Hill, November 1998.
9. A. Gueziec, G. Taubin, B. Horn, and F. Lazarus. A framework for streaming geometry in VRML. IEEE CG&A, 19(2):68-78, 1999.
10. H. Hoppe. View-dependent refinement of progressive meshes. In Proceedings of SIGGRAPH '97 (Los Angeles, CA), pages 189-197. ACM Press, August 1997.
11. R. Klein, A. Schilling, and W. Straßer. Illumination dependent refinement of multiresolution meshes. In Computer Graphics International, pages 680-687, June 1998.
12. D. Luebke and C. Erikson. View-dependent simplification of arbitrary polygonal environments. In Proceedings of SIGGRAPH '97 (Los Angeles, CA), pages 198-208. ACM SIGGRAPH, ACM Press, August 1997.
13. W. Mark, S. Randolph, M. Finch, J. Van Verth, and R. Taylor II. Adding force feedback to graphics systems: Issues and solutions. In Proceedings of SIGGRAPH '96 (New Orleans, LA, August 4-9, 1996), pages 447-452. ACM Press, 1996.
14. M. Minsky, M. Ouh-young, O. Steele, F. P. Brooks, Jr., and M. Behensky. Feeling and seeing: Issues in force display. In 1990 Symposium on Interactive 3D Graphics, pages 235-243, March 1990.
15. H. Morgenbesser and M. Srinivasan. Force shading for haptic shape perception. In ASME Dynamic Systems and Control Division, volume 58, pages 407-412, 1996.
16. D. Ruspini, K. Kolarov, and O. Khatib. The haptic display of complex graphical environments. In Proceedings of SIGGRAPH '97 (Los Angeles, CA), pages 345-352. ACM SIGGRAPH, ACM Press, August 1997.
17. J. Salisbury and C. Tarr. Haptic rendering of surfaces defined by implicit functions. In ASME Dynamic Systems and Control Division, November 1997.
18. A. Schilling and R. Klein. Graphics in/for digital libraries: rendering of multiresolution models with texture. Computers and Graphics, 22(6):667-679, 1998.
19. R. Taylor, W. Robinett, V. Chi, F. Brooks, Jr., W. Wright, R. Williams, and E. Snyder. The nanomanipulator: A virtual-reality interface for a scanning tunnelling microscope. In Proceedings, SIGGRAPH 93, pages 127-134, 1993.
20. J. Xia, J. El-Sana, and A. Varshney. Adaptive real-time level-of-detail-based rendering for polygonal models. IEEE Transactions on Visualization and Computer Graphics, pages 171-183, June 1997.