A user-centric method for fast, interactive, robust, and high-quality shadow removal is presented. Our algorithm can perform detection and removal in a range of difficult cases, such as highly textured and colored shadows. To perform detection, an on-the-fly learning approach is adopted, guided by two rough user inputs marking pixels of the shadow and the lit area. After detection, shadow removal is performed by registering the penumbra to a normalized frame, which allows efficient estimation of nonuniform shadow illumination changes, resulting in accurate and robust removal. Another major contribution of this work is the first validated and multiscene-category ground truth for shadow removal algorithms. This data set, containing 186 images, eliminates inconsistencies between shadow and shadow-free images and provides a range of different shadow types such as soft, textured, colored, and broken shadows. Using this data, the most thorough comparison of state-of-the-art shadow removal methods to date is performed, showing our proposed algorithm to outperform the state of the art across several measures and shadow categories. To complement our data set, an online shadow removal benchmark website is also presented to encourage future open comparisons in this challenging field of research.
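To make the detect-then-remove pipeline concrete, the following is a minimal sketch of the two steps. It is not the authors' implementation: the classifier choice (k-NN on raw RGB), the constant-ratio relighting, and all function and parameter names are our own assumptions for illustration. The actual method learns its detector on the fly from the scribbles and estimates a spatially varying illumination change across the registered penumbra, which the constant ratio below deliberately over-simplifies.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def detect_shadow_mask(image, shadow_scribble, lit_scribble, k=5):
    """Classify every pixel as shadow/lit from two rough user scribbles.

    image: H x W x 3 float array in [0, 1].
    shadow_scribble, lit_scribble: H x W boolean masks marking the pixels
    covered by the two user strokes. Returns an H x W boolean shadow mask.
    """
    pixels = image.reshape(-1, 3)

    # Training samples come only from the two scribbles (on-the-fly learning).
    X = np.vstack([pixels[shadow_scribble.ravel()],
                   pixels[lit_scribble.ravel()]])
    y = np.concatenate([np.ones(int(shadow_scribble.sum())),   # 1 = shadow
                        np.zeros(int(lit_scribble.sum()))])    # 0 = lit

    clf = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    return clf.predict(pixels).reshape(image.shape[:2]).astype(bool)

def relight_constant(image, shadow_mask, shadow_scribble, lit_scribble):
    """Crude constant-ratio relighting (a stand-in for the paper's
    spatially varying penumbra estimation): scale shadow pixels by the
    per-channel ratio of mean lit to mean shadow scribble colors."""
    pixels = image.reshape(-1, 3)
    ratio = (pixels[lit_scribble.ravel()].mean(axis=0) /
             pixels[shadow_scribble.ravel()].mean(axis=0))
    out = image.copy()
    out[shadow_mask] = np.clip(out[shadow_mask] * ratio, 0.0, 1.0)
    return out
```

A constant per-channel ratio leaves visible seams in the penumbra, which is precisely why the method above registers the penumbra to a normalized frame and estimates a nonuniform illumination change there instead.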
Our method's table entry: user input is two types of scribbles for sampling shadow and lit intensities; illumination preserving: yes; texture preserving: yes; color correction: yes.
“Illumination preserving” refers to the ability to preserve the original illumination in the lit area. “Texture preserving” refers to the preservation of the correct surface texture under the penumbra after removal. “Color correction” refers to the ability to correct color artifacts caused by image post-processing after removal.
The left and right sides of the table show the error scores computed over all pixels in the image and over shadow-area pixels only, respectively. For each attribute's score, images whose predominant attribute is a different one are excluded; test cases therefore have a strong single bias toward one of the attributes. “Other” refers to a set of shadow cases showing no markedly predominant attribute. “Mean” refers to the average score for each category. Standard deviations are shown in brackets. In our ordering, the average error is compared before comparing the standard deviation. Method [2] is trained using a large shadow detection data set from [19]. The user input for method [16] combines the simple input for our method with some additional strokes to accommodate the sensitive shadow detection of [16]. The best scores are shown in bold.
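The distinction between the two sides of the table can be made concrete with a small sketch. Below is a minimal masked error computation, assuming an RMSE-style metric on RGB values; the benchmark's exact metric and color space are not specified here, and the function name is ours:

```python
import numpy as np

def masked_rmse(result, ground_truth, mask=None):
    """RMSE between a shadow-removal result and the shadow-free ground truth.

    result, ground_truth: H x W x 3 float arrays in [0, 1].
    mask: optional H x W boolean array; if given, the error is computed over
    the masked pixels only (e.g., the shadow area), otherwise over the image.
    """
    diff = (result - ground_truth) ** 2
    if mask is not None:
        diff = diff[mask]
    return float(np.sqrt(diff.mean()))

# Left side of the table: every pixel; right side: shadow-area pixels only.
# score_all    = masked_rmse(result, gt)
# score_shadow = masked_rmse(result, gt, mask=shadow_mask)
```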