Most known optical range sensors require a large amount of two-dimensional raw data from which the three-dimensional (3D) data are decoded, and they are therefore associated with considerable cost. The cost arises from expensive hardware as well as from the time needed to acquire the images. We address, in terms of information theory, the question of how one can acquire maximum shape information with a minimum amount of image raw data. It is shown that proper optical redundancy reduction can greatly reduce the amount of raw data needed. Based on these considerations, a 3D sensor is introduced that needs only a single color (red-green-blue) raw image yet delivers data with a longitudinal measurement uncertainty of only approximately 2 µm.
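To make the single-color-image idea concrete, here is a minimal sketch of one common way such a sensor can be realized; this is an illustrative assumption, not necessarily the encoding used in this work. If the red, green, and blue channels of one raw image carry three sinusoidal fringe patterns mutually shifted by 120°, the fringe phase, and hence the surface height, can be decoded per pixel from that single exposure by the standard three-step phase-shifting formula:

```python
import numpy as np

def decode_phase(rgb):
    """Decode fringe phase from a single color image.

    rgb: (H, W, 3) array whose channels are assumed to hold fringes
    I_k = A + B*cos(phi + k*120 deg) for k = 0, 1, 2 (hypothetical encoding).
    Returns the wrapped phase phi in (-pi, pi] per pixel.
    """
    i0, i1, i2 = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Three-step phase-shifting formula: tan(phi) = sqrt(3)(I2 - I1) / (2*I0 - I1 - I2)
    return np.arctan2(np.sqrt(3.0) * (i2 - i1), 2.0 * i0 - i1 - i2)

# Synthetic check: encode a known phase ramp into the three channels
# and verify that it is recovered from the single "color image".
h, w = 4, 256
phi = np.tile(np.linspace(-np.pi / 2, np.pi / 2, w), (h, 1))
shifts = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
img = 0.5 + 0.5 * np.cos(phi[..., None] + shifts)   # shape (H, W, 3)
recovered = decode_phase(img)
assert np.allclose(recovered, phi, atol=1e-9)
```

The point of the sketch is the redundancy reduction described above: three separate fringe exposures collapse into the three channels of one raw frame, so one 2D acquisition suffices to recover the per-pixel phase.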