Interdisciplinary research spanning computer science, mathematics, and data science, applied to science and engineering domains. Most recently this work has focused on data visualization for big and high-dimensional data, using tools from statistical and machine learning and using web technology for exploration. Recent domains include remote sensing, climate science, medical imaging, and computer vision. For more information see http://www-cs.ccny.cuny.edu/~grossberg Supervisors: Raul Bott
Conventional vision systems and algorithms assume the camera to have a single viewpoint. However, sensors need not always maintain a single viewpoint. For instance, an incorrectly aligned system could cause non-single viewpoints. Also, systems could be designed to deviate deliberately from a single viewpoint to trade off image characteristics such as resolution and field of view. In these cases, a locus of viewpoints is formed, called a caustic. In this paper, we present an in-depth analysis of the viewpoint loci for catadioptric cameras with conic reflectors. Properties of these viewpoint loci with regard to field of view, resolution and other geometric properties are presented. In addition, we present a simple technique to calibrate such non-single viewpoint catadioptric cameras and estimate their viewpoint loci (caustics) from known camera motion.
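The caustic described above can be approximated numerically: it is the envelope of the family of reflected rays, which can be traced by intersecting neighboring rays of the reflected pencil. The sketch below is a 2D illustration with a circular mirror, not the paper's analytic treatment of general conic reflectors; all function names are hypothetical.

```python
import numpy as np

def reflect(d, n):
    """Reflect direction d about unit normal n."""
    return d - 2 * np.dot(d, n) * n

def caustic_points(pinhole, center, radius, thetas):
    """Approximate the caustic (envelope of reflected rays) of a
    circular mirror in 2D by intersecting neighboring reflected rays.
    Illustrative sketch only, not the paper's derivation."""
    pts, dirs = [], []
    for t in thetas:
        p = center + radius * np.array([np.cos(t), np.sin(t)])
        n = (p - center) / radius              # outward unit normal
        d = p - pinhole
        d = d / np.linalg.norm(d)              # incoming ray direction
        pts.append(p)
        dirs.append(reflect(d, n))
    caustic = []
    for i in range(len(pts) - 1):
        # Solve p_i + s*r_i = p_{i+1} + u*r_{i+1} for (s, u).
        A = np.column_stack([dirs[i], -dirs[i + 1]])
        b = pts[i + 1] - pts[i]
        s, _ = np.linalg.solve(A, b)
        caustic.append(pts[i] + s * dirs[i])
    return np.array(caustic)
```

As a sanity check, a pinhole placed at the center of a circular mirror reflects every ray back through the center, so the viewpoint locus degenerates to a single point.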
Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), 2004
We present a method for controlling the appearance of an arbitrary 3D object using a projector and a camera. Our goal is to make one object look like another by projecting a carefully determined compensation image onto the object. The determination of the appropriate compensation image requires accounting for spatial variation in the object's reflectance, the effects of environmental lighting, and the spectral responses, spatially varying fall-offs, and non-linear responses in the projector-camera system. Addressing each of these effects, we present a compensation method which calls for the estimation of only a small number of parameters, as part of a novel off-line radiometric calibration. This calibration is accomplished by projecting and acquiring a minimal set of 6 images, irrespective of the object. Results of the calibration are then used on-line to compensate each input image prior to projection. Several experimental results are shown that demonstrate the ability of this method to control the appearance of everyday objects. Our method has direct applications in several areas including smart environments, product design and presentation, adaptive camouflages, interactive education and entertainment.
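The core compensation step can be illustrated with a deliberately simplified per-pixel model, captured = albedo × (projected + ambient). The paper's actual calibration additionally handles non-linear projector and camera responses, spectral mixing, and spatial fall-off, all of which this sketch omits; the function name is hypothetical.

```python
import numpy as np

def compensation_image(desired, albedo, ambient, p_max=1.0):
    """Solve the simplified model captured = albedo * (projected + ambient)
    for the projector image that makes the object appear as `desired`.
    Clipping models the projector's limited dynamic range."""
    proj = desired / np.maximum(albedo, 1e-6) - ambient
    return np.clip(proj, 0.0, p_max)
```

Under this toy model, projecting the returned image onto a surface with the given albedo and ambient term reproduces the desired appearance wherever the result was not clipped.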
Proceedings IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2000 (Cat. No.PR00662), 2000
Histograms are used to analyze and index images. They have been found experimentally to have low sensitivity to certain types of image morphisms, for example, viewpoint changes and object deformations. The precise effect of these image morphisms on the histogram, however, has not been studied. In this work we derive the complete class of local transformations that preserve or scale the magnitude of the histogram of all images. We also derive a more general class of local transformations that preserve the histogram relative to a particular image. To achieve this, the transformations are represented as solutions to families of vector fields acting on the image. The local effect of fixed points of the fields on the histograms is also analyzed. The analytical results are verified with several examples. We also discuss several applications and the significance of these transformations for histogram indexing.
Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, 2001
... computed, stored, and matched efficiently. In figure 1 we show an example of a multiresolution histogram. The first row shows the image pyramid and the second row the multiresolution histogram. In addition to the initial ...
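The multiresolution histogram mentioned in the excerpt above can be sketched directly: build an image pyramid and concatenate the intensity histogram of each level into one feature vector. This sketch uses 2×2 block averaging for downsampling, whereas a Gaussian pyramid would be the more standard choice; treat it as an approximation.

```python
import numpy as np

def multiresolution_histogram(img, levels=4, bins=16):
    """Concatenate normalized intensity histograms of an image pyramid.
    `img` is a float image in [0, 1] with power-of-two dimensions."""
    feats = []
    cur = img.astype(float)
    for _ in range(levels):
        h, _ = np.histogram(cur, bins=bins, range=(0.0, 1.0), density=True)
        feats.append(h)
        if min(cur.shape) >= 2:
            # 2x2 block averaging (crude stand-in for Gaussian filtering)
            cur = 0.25 * (cur[::2, ::2] + cur[1::2, ::2]
                          + cur[::2, 1::2] + cur[1::2, 1::2])
    return np.concatenate(feats)
```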
Proceedings of the 10th IEEE International Conference on Information Technology and Applications in Biomedicine, 2010
... Yu-Chi Hu, Michael D. Grossberg and Gig S. Mageras ... the number of boundary terms (edge costs in the graph) in our energy function for graph-cut minimization is reduced to 40% to 50% of the original number of nodes in the ROI. We measure the CPU time in one liver case on a dual-Xeon workstation. ...
(a) acquisition (b) clustering (c) refinement. Figure 1. Segmentation pipeline. (a) In this simulated LIDAR setup, the frustum represents a scanner projecting a beam onto a 3D model. The beam strikes the nearest surface and measures the distance, rendered here in false color. (b) Similarity based on local plane fitting drives a hierarchical clustering process. (c) Planar components are refined and merged using a variant of the k-means algorithm.
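One ingredient of the clustering stage in the caption above, the local plane fit, can be sketched with a PCA/SVD fit: the plane normal is the singular vector of the centered points with the smallest singular value. This is a generic least-squares fit, not necessarily the exact estimator used in the paper.

```python
import numpy as np

def fit_plane(points):
    """Total-least-squares plane through an (N, 3) array of 3D points.
    Returns (centroid, unit_normal); the normal is the right singular
    vector associated with the smallest singular value."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[-1]
```

Similarity between neighboring point patches could then be scored by how well each patch's points agree with the other's fitted plane.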
The performance of many image analysis tasks depends on the image resolution at which they are applied. Traditionally, resolution selection methods rely on spatial derivatives of image intensities. Differential measurements, however, are sensitive to noise and are local. They cannot characterize patterns, such as textures, which are defined over extensive image regions. In this work, we present a novel tool for resolution selection that considers sufficiently large image regions and is robust to noise. It is based on the generalized entropies of the histograms of an image at multiple resolutions. We first examine, in general, the variation of histogram entropies with image resolution. Then, we examine the sensitivity of this variation for shapes and textures in an image. Finally, we discuss the significance of resolutions of maximum histogram entropy. It is shown that computing features at these resolutions increases the discriminability between images. It is also shown that maximum histogram entropy values can be used to improve optical flow estimates for block based algorithms in image sequences with a changing zoom factor.
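The selection rule described above can be sketched as: compute the histogram entropy at each pyramid level and keep the level where it peaks. This sketch uses the ordinary Shannon entropy and 2×2 averaging; the paper works with generalized entropies and a proper filter pyramid, so the function names and details here are illustrative.

```python
import numpy as np

def histogram_entropy(img, bins=32):
    """Shannon entropy (bits) of the intensity histogram of a
    float image with values in [0, 1]."""
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = h / h.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def max_entropy_resolution(img, levels=4):
    """Return the pyramid level whose histogram entropy is largest."""
    best, best_h = 0, -1.0
    cur = img.astype(float)
    for lvl in range(levels):
        h = histogram_entropy(cur)
        if h > best_h:
            best, best_h = lvl, h
        cur = 0.25 * (cur[::2, ::2] + cur[1::2, ::2]
                      + cur[::2, 1::2] + cur[1::2, 1::2])
    return best
```

For pure noise, averaging concentrates the histogram (by the central limit theorem), so the finest level has the largest entropy; structured textures can peak at a coarser level.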
Brightness values of pixels in an image are related to image irradiance by a non-linear function, called the radiometric response function. Recovery of this function is important since many algorithms in computer vision and image processing use image irradiance. Several investigators have described methods for recovery of the radiometric response, without using charts, from multiple exposures of the same scene. All these recovery methods are based solely on the correspondence of gray-levels in one exposure to gray-levels in another exposure. This correspondence can be described by a function we call the brightness transfer function. We show that brightness transfer functions, and thus images themselves, do not uniquely determine the radiometric response function, nor the ratios of exposure between the images. We completely determine the ambiguity associated with the recovery of the response function and the exposure ratios. We show that all previous methods break these ambiguities only by making assumptions on the form of the response function. While iterative schemes which may not converge were used previously to find the exposure ratio, we show when it can be recovered directly from the brightness transfer function. We present a novel method to recover the brightness transfer function between images from only their brightness histograms. This allows us to determine the brightness transfer function between images of different scenes whenever the change in the distribution of scene radiances is small enough. We show an example of recovery of the response function from an image sequence with scene motion by constraining the form of the response function to break the ambiguities. We are ignoring spatially varying linear factors, for example, due to the finite aperture.
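Recovering the brightness transfer function from histograms alone can be sketched via histogram specification: with cumulative brightness histograms H1 and H2, a monotonic transfer function satisfies tau = H2^{-1} ∘ H1. The sketch below is a discrete, interpolated version of that construction; it assumes the two radiance distributions are comparable, as the abstract requires, and the function name is hypothetical.

```python
import numpy as np

def brightness_transfer(img1, img2, bins=256):
    """Estimate the brightness transfer function tau = H2^{-1} o H1
    from the cumulative brightness histograms of two images with
    values in [0, 1]. Returns (brightness_levels, tau_values)."""
    h1, edges = np.histogram(img1, bins=bins, range=(0.0, 1.0))
    h2, _ = np.histogram(img2, bins=bins, range=(0.0, 1.0))
    c1 = np.cumsum(h1) / h1.sum()           # H1, discretized
    c2 = np.cumsum(h2) / h2.sum()           # H2, discretized
    centers = 0.5 * (edges[:-1] + edges[1:])
    # For each brightness level in image 1, find the level in image 2
    # with the same cumulative count (the inverse CDF lookup).
    return centers, np.interp(c1, c2, centers)
```

For example, if the second image is a gamma-mapped copy of the first (b → b²), the estimated transfer function should approximate the squaring map.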
We will assume that the response function is normalized both in domain (irradiance) and range (brightness).
Conference proceedings : ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual Conference, 2008
Planning radiotherapy and surgical procedures usually requires onerous manual segmentation of anatomical structures from medical images. In this paper we present a semi-automatic and accurate segmentation method to dramatically reduce the time and effort required of expert users. This is accomplished by giving a user an intuitive graphical interface to indicate samples of target and non-target tissue by loosely drawing a few brush strokes on the image. We use these brush strokes to provide the statistical input for a Conditional Random Field (CRF) based segmentation. Since we extract purely statistical information from the user input, we eliminate the need for assumptions about boundary contrast previously used by many other methods. A new feature of our method is that the statistics on one image can be reused on related images without registration. To demonstrate this, we show that boundary statistics provided on a few 2D slices of volumetric medical data, can be propagated through the ...
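The statistical input extracted from brush strokes can be illustrated with a minimal sketch: fit a Gaussian intensity model to the pixels under each stroke and label pixels by the more likely model. The paper's CRF also has pairwise (boundary) terms minimized with graph cuts; this sketch keeps only the unary data term, and all names are hypothetical.

```python
import numpy as np

def stroke_statistics(intensities):
    """Mean and variance of the intensities under one brush stroke."""
    return intensities.mean(), intensities.var() + 1e-6

def unary_labels(img, fg_stats, bg_stats):
    """Per-pixel label (1 = target) from Gaussian negative
    log-likelihoods. A full CRF would add pairwise smoothness
    terms and solve with graph cuts."""
    def nll(x, stats):
        m, v = stats
        return 0.5 * np.log(2 * np.pi * v) + (x - m) ** 2 / (2 * v)
    return (nll(img, fg_stats) < nll(img, bg_stats)).astype(int)
```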
2007 IEEE 11th International Conference on Computer Vision, 2007
Many vision tasks such as scene segmentation, or the recognition of materials within a scene, become considerably easier when it is possible to measure the spectral reflectance of scene surfaces. In this paper, we present an efficient and robust approach for recovering spectral reflectance in a scene that combines the advantages of using multiple spectral sources and a multispectral camera. We have implemented a system based on this approach using a cluster of light sources with different spectra to illuminate the scene and a conventional RGB camera to acquire images. Rather than sequentially activating the sources, we have developed a novel technique to determine the optimal multiplexing sequence of spectral sources so as to minimize the number of acquired images. We use our recovered spectral measurements to recover the continuous spectral reflectance for each scene point by using a linear model for spectral reflectance. Our imaging system can produce multispectral videos of scenes at 30 fps. We demonstrate the effectiveness of our system through extensive evaluation. As a demonstration, we present the results of applying data recovered by our system to material segmentation and spectral relighting.
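With a linear reflectance model, recovery per scene point reduces to a small least-squares problem b = A x, where b stacks the camera measurements under each source combination, x holds the basis coefficients, and A folds together the source spectra, camera response, and multiplexing sequence. The sketch below treats A as given and generic; the paper's contribution of choosing an optimal multiplexing sequence is not reproduced here.

```python
import numpy as np

def recover_reflectance(measurements, system_matrix):
    """Least-squares recovery of linear-model reflectance coefficients
    from multiplexed measurements: solve min ||A x - b||_2.
    `system_matrix` (A) combines source spectra, camera response,
    and the multiplexing sequence."""
    x, *_ = np.linalg.lstsq(system_matrix, measurements, rcond=None)
    return x
```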
We present fast methods for separating the direct and global illumination components of a scene measured by a camera and illuminated by a light source. In theory, the separation can be done with just two images taken with a high frequency binary illumination pattern and its complement. In practice, a larger number of images are used to overcome the optical and resolution limitations of the camera and the source. The approach does not require the material properties of objects and media in the scene to be known. However, we require ...
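In the ideal two-image case described above, with a 50%-on high-frequency binary pattern and its complement, each pixel sees the full direct term in one image plus half the global term, and half the global term alone in the other. Per pixel this gives direct = L_max − L_min and global = 2 · L_min, which the following sketch implements:

```python
import numpy as np

def separate_direct_global(l_pattern, l_complement):
    """Ideal two-image direct/global separation for a 50%-on
    high-frequency binary pattern and its complement:
      direct = L_max - L_min,   global = 2 * L_min  (per pixel)."""
    l_max = np.maximum(l_pattern, l_complement)
    l_min = np.minimum(l_pattern, l_complement)
    return l_max - l_min, 2.0 * l_min
```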
An imaging model provides a mathematical description of correspondence between points in a scene and in an image. The dominant imaging model, perspective projection, has long been used to describe traditional cameras as well as the human eye. We propose an imaging model which is flexible enough to represent an arbitrary imaging system. For example, using this model we can describe systems using fisheye lenses or compound insect eyes, which violate the assumptions of perspective projection. By relaxing the requirements of perspective projection, we give imaging system designers greater freedom to explore systems which meet other requirements such as compact size and wide field of view. We formulate our model by noting that all imaging systems perform a mapping from incoming scene rays to photosensitive elements on the image detector. This mapping can be conveniently described using a set of virtual sensing elements called raxels. Raxels include geometric, radiometric and optical properties. We present a novel ray-based calibration method that uses structured light patterns to extract the raxel parameters of an arbitrary imaging system. Experimental results for perspective as well as non-perspective imaging systems are included.
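A raxel, as described above, pairs a ray with radiometric and optical attributes. A minimal data-structure sketch, with field names that are illustrative rather than the paper's exact parameterization, might look like this; a pinhole camera is then the special case where every raxel's ray passes through one point.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Raxel:
    """A virtual sensing element: the ray it collects light along,
    plus simple radiometric/optical attributes (illustrative fields)."""
    origin: np.ndarray                 # a point on the ray, shape (3,)
    direction: np.ndarray              # unit ray direction, shape (3,)
    gain: float = 1.0                  # radiometric scale factor
    point_spread: float = 0.0          # optical blur attribute

def perspective_raxels(width, height, focal):
    """Raxels of an ideal pinhole camera: all rays share one origin."""
    raxels = []
    for y in range(height):
        for x in range(width):
            d = np.array([x - width / 2, y - height / 2, focal], float)
            raxels.append(Raxel(np.zeros(3), d / np.linalg.norm(d)))
    return raxels
```

A non-perspective system (fisheye, catadioptric, compound eye) would simply assign each raxel its own origin, which is what the paper's structured-light calibration recovers.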
Papers by Michael D Grossberg