Volume visualization software usually has to deal with datasets that are larger than the GPU memory can hold. This is especially true in one of the most popular application scenarios: medical visualization. In this paper we explore the quality of different downsampling methods and present a new approach that produces smooth lower-resolution representations, yet still preserves small features that are prone to disappear with other approaches.
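The abstract contrasts plain averaging with feature-preserving downsampling. The sketch below is not the paper's method; it is a minimal illustration of the underlying problem, comparing 2x average pooling with a hypothetical variant that keeps a block's most extreme voxel when it deviates strongly from the block mean, so a small bright feature is not diluted away:

```python
import numpy as np

def downsample_avg(vol):
    """Plain 2x average-pooling downsample of a volume with even dims."""
    x, y, z = (s // 2 for s in vol.shape)
    blocks = vol[:2 * x, :2 * y, :2 * z].reshape(x, 2, y, 2, z, 2)
    return blocks.mean(axis=(1, 3, 5))

def downsample_feature_preserving(vol, threshold=0.5):
    """Hypothetical feature-preserving variant (illustration only):
    if a 2x2x2 block contains a voxel deviating from the block mean
    by more than `threshold`, keep that extreme voxel instead of the
    mean, so small high-contrast features survive downsampling."""
    x, y, z = (s // 2 for s in vol.shape)
    blocks = vol[:2 * x, :2 * y, :2 * z].reshape(x, 2, y, 2, z, 2)
    # Gather each block's 8 voxels into the last axis.
    flat = blocks.transpose(0, 2, 4, 1, 3, 5).reshape(x, y, z, 8)
    mean = flat.mean(axis=-1)
    dev = np.abs(flat - mean[..., None])
    idx = dev.argmax(axis=-1)
    extreme = np.take_along_axis(flat, idx[..., None], axis=-1)[..., 0]
    keep = dev.max(axis=-1) > threshold
    return np.where(keep, extreme, mean)
```

With a single unit-intensity voxel in an otherwise empty 4x4x4 volume, average pooling dilutes it to 0.125, while the feature-preserving variant keeps it at 1.0.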
The study of the colonic volume is a procedure with strong relevance to gastroenterologists. Depending on the clinical protocols, the volume analysis has to be performed on MRI of the unprepared colon without contrast administration. In such circumstances, existing measurement procedures are cumbersome and time-consuming for the specialists. The algorithm presented in this paper permits a quasi-automatic segmentation of the unprepared colon on T2-weighted MRI scans. The segmentation algorithm is organized as a three-stage pipeline. In the first stage, a custom tubularity filter is run to detect colon candidate areas. The specialists provide a list of points along the colon trajectory, which are combined with tubularity information to calculate an estimation of the colon medial path. In the second stage, we delimit the region of interest by applying custom segmentation algorithms to detect colon neighboring regions and the fat capsule containing abdominal organs. Finally, within the reduced search space, segmentation is performed via 3D graph-cuts in a three-stage multigrid approach. Our algorithm was tested on MRI abdominal scans, including different acquisition resolutions, and its results were compared to the colon ground truth segmentations provided by the specialists. The experiments proved the accuracy, efficiency, and usability of the algorithm, while the variability of the scan resolutions helped to demonstrate the computational scalability of the multigrid architecture. The system is fully applicable to the colon measurement clinical routine, and is a substantial step towards a fully automated segmentation.
Most visibility culling algorithms require convexity of occluders. Occluder synthesis algorithms attempt to construct large convex occluders inside bulky non-convex sets. Occluder fusion algorithms generate convex occluders that are contained in the umbra cast by a group of objects given an area light. In this paper we prove that convexity requirements can be shifted from the occluders to their umbra with no loss of efficiency, and use this property to show how some special non-planar, non-convex closed polylines that we call "hoops" can be used to compute occlusion efficiently for objects that have no large interior convex sets and were thus rejected by previous approaches.
We describe a new approach to modelling pearlescent paints based on decomposing paint layers into stacks of imaginary thin sublayers. The sublayers are chosen to be so thin that multiple scattering can be neglected within each sublayer and needs to be considered only across different sublayers. Based on this assumption, an efficient recursive procedure for assembling the layers is developed, which makes it possible to compute the paint BRDF at interactive speeds. Since the proposed paint model connects fundamental optical properties of multi-layer pearlescent and metallic paints with their microscopic structure, interactive prediction of the paint appearance based on its composition becomes possible.
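The recursive assembly of sublayers can be illustrated with the classic "adding" relation for two stacked layers, which sums the geometric series of inter-layer bounces. This is a scalar-flux sketch of that general idea, not the paper's actual BRDF recursion:

```python
def add_layers(r1, t1, r2, t2):
    """Combine two sublayers, each described by scalar reflectance r
    and transmittance t, into one effective layer.  The 1/(1 - r1*r2)
    factor sums the infinite series of bounces between the layers:
      R = r1 + t1 * r2 * t1 / (1 - r1 * r2)
      T = t1 * t2 / (1 - r1 * r2)
    """
    denom = 1.0 - r1 * r2
    R = r1 + t1 * r2 * t1 / denom
    T = t1 * t2 / denom
    return R, T

def stack(sublayers):
    """Fold a list of (r, t) sublayers into a total (R, T) by applying
    the adding relation recursively, one sublayer at a time."""
    R, T = sublayers[0]
    for r, t in sublayers[1:]:
        R, T = add_layers(R, T, r, t)
    return R, T
```

Stacking more identical sublayers increases total reflectance and decreases total transmittance, and (for lossless inputs with r + t <= 1) the result never exceeds energy conservation, which is the qualitative behavior the recursive assembly relies on.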
The development of effective user interfaces for virtual reality environments must face different challenges. Not only must one evolve new paradigms that are adequate in those environments, but they must also be minimally invasive (not to spoil the immersion illusion), and they must rely on an adequate support from the underlying models and model-handling functions, so that the maximum complexity of usable scenes is not seriously limited. This project aims at pushing further the state of the art, developing new techniques at the modelling level (to allow handling models like the most complex currently available, which are too large to fit in core memory for a reasonably sized workstation). It also aims at perfecting the state of the art concerning the interfaces themselves, developing specific techniques adapted to the interaction in those media. Moreover, the results will be tested by developing specific prototypes to verify the usability of the proposed techniques from the point of v...
Given a client-server communication network, with workstations equipped with 3D texture hardware, we propose a technique that guarantees the optimal use of the client texture hardware. We consider the best representation of a data model that has to be rendered on the client side as the one that requires the minimal texture space while preserving image quality. With this in mind, the basis of the proposed technique is the selection of the best multiresolution representation from a volume hierarchy maintained by the server. The key points of our proposal are: (i) the hierarchical data structure used by both the server and the client to maintain the data; (ii) the data management process applied by the server to satisfy client requirements; and (iii) the ability of the client to predict part of one transmission by analyzing the previous one. Such a capability allows the client to perform some computations in advance and, therefore, improves frame rates.
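The core selection criterion, picking the finest representation whose texture footprint still fits the client's hardware, can be sketched as a walk down a volume pyramid. The function names and the halving-per-level layout below are illustrative assumptions, not the paper's actual data structure:

```python
def texture_bytes(dims, bytes_per_voxel=1):
    """Footprint of a 3D texture with the given (x, y, z) dimensions."""
    x, y, z = dims
    return x * y * z * bytes_per_voxel

def best_level(full_dims, budget_bytes, bytes_per_voxel=1):
    """Walk a volume pyramid from finest to coarsest and return the
    first (finest) level whose 3D-texture footprint fits the client's
    texture budget.  Level 0 is full resolution; each coarser level
    halves every dimension (hypothetical layout for illustration)."""
    dims = tuple(full_dims)
    level = 0
    while texture_bytes(dims, bytes_per_voxel) > budget_bytes:
        if max(dims) == 1:
            raise ValueError("budget too small for even a 1^3 volume")
        dims = tuple(max(1, d // 2) for d in dims)
        level += 1
    return level, dims
```

For example, a 512^3 volume at one byte per voxel (128 MiB) does not fit a 32 MiB texture budget, but its half-resolution 256^3 level (16 MiB) does, so level 1 would be requested from the server.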
Papers by Isabel Navazo