
High-Fidelity Specular SVBRDF Acquisition From Flash Photographs

Published: 09 January 2023

Abstract

Obtaining accurate SVBRDFs from 2D photographs of shiny, heterogeneous 3D objects is a highly sought-after goal for domains like cultural heritage archiving, where documenting color appearance in high fidelity is critical. Prior work such as the promising framework by Nam et al. simplifies the problem by assuming that specular highlights are symmetric and isotropic about an estimated surface normal. The present work builds on this foundation with several significant modifications. Recognizing the importance of the surface normal as an axis of symmetry, we compare nonlinear optimization for normals against the linear approximation proposed by Nam et al. and find the nonlinear approach superior; more generally, the surface normal estimates have a significant impact on the reconstructed color appearance of the object. We also examine a monotonicity constraint for reflectance and develop a generalization that additionally enforces continuity and smoothness when optimizing continuous monotonic functions such as a microfacet distribution. Finally, we explore the impact of replacing an arbitrary 1D basis function with a traditional parametric microfacet distribution (GGX), and find this to be a reasonable approximation that trades some fidelity for practicality in certain applications. Both representations can be used in existing rendering architectures like game engines or online 3D viewers, while retaining accurate color appearance for fidelity-critical applications like cultural heritage or online sales.
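As context for the GGX comparison in the abstract, the sketch below evaluates the standard isotropic GGX (Trowbridge-Reitz) normal distribution function [60], [61]. This is the textbook formula, not code from the paper; the function name and the `alpha` roughness parameterization are illustrative.

```python
import math

def ggx_ndf(n_dot_h: float, alpha: float) -> float:
    """Isotropic GGX (Trowbridge-Reitz) NDF, D(h).

    n_dot_h -- cosine of the angle between the surface normal and the
               half vector (assumed in [0, 1])
    alpha   -- GGX roughness parameter (many engines use alpha = roughness**2)
    """
    a2 = alpha * alpha
    # D(h) = alpha^2 / (pi * ((n.h)^2 (alpha^2 - 1) + 1)^2)
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)
```

Note that for `alpha = 1` the distribution is constant (`1/pi`, a perfectly diffuse-looking lobe), while smaller `alpha` concentrates the lobe around the normal, i.e. D decreases monotonically as `n_dot_h` falls, which is the kind of continuous monotonic behavior the generalized constraint in the abstract targets.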

References

[1]
G. Nam, J. H. Lee, D. Gutierrez, and M. H. Kim, “Practical SVBRDF acquisition of 3D objects with unstructured flash photography,” ACM Trans. Graph., vol. 37, no. 6, pp. 267:1–267:12, 2018.
[2]
M. Tetzlaff and G. Meyer, “Using flash photography and image-based rendering to document cultural heritage artifacts,” in Proc. Eurographics Workshop Graph. Cultural Heritage. Eurographics Assoc., 2016, pp. 137–146.
[3]
D. N. Wood et al., “Surface light fields for 3D photography,” in Proc. 27th Annu. Conf. Comput. Graph. Interactive Techn., 2000, pp. 287–296.
[4]
J. Filip and R. Vávra, “Fast method of sparse acquisition and reconstruction of view and illumination dependent datasets,” Comput. Graph., vol. 37, no. 5, pp. 376–388, 2013.
[5]
T. Zickler, R. Ramamoorthi, S. Enrique, and P. N. Belhumeur, “Reflectance sharing: Predicting appearance from a sparse set of images of a known shape,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 8, pp. 1287–1302, 2006.
[6]
P. Sen and S. Darabi, “Compressive dual photography,” Comput. Graph. Forum, vol. 28, no. 2, pp. 609–618, 2009.
[7]
J. Wang, Y. Dong, X. Tong, Z. Lin, and B. Guo, “Kernel Nyström method for light transport,” ACM Trans. Graph., vol. 28, no. 3, pp. 29:1–29:10, 2009.
[8]
P. Ren, Y. Dong, S. Lin, X. Tong, and B. Guo, “Image based relighting using neural networks,” ACM Trans. Graph., vol. 34, no. 4, pp. 111:1–111:12, 2015.
[9]
M. Aittala, T. Aila, and J. Lehtinen, “Reflectance modeling by neural texture synthesis,” ACM Trans. Graph., vol. 35, no. 4, pp. 65:1–65:13, 2016.
[10]
V. Deschaintre, M. Aittala, F. Durand, G. Drettakis, and A. Bousseau, “Single-image SVBRDF capture with a rendering-aware deep network,” ACM Trans. Graph., vol. 37, no. 4, pp. 128:1–128:15, 2018.
[11]
K. Kang, Z. Chen, J. Wang, K. Zhou, and H. Wu, “Efficient reflectance capture using an autoencoder,” ACM Trans. Graph., vol. 37, no. 4, pp. 127:1–127:10, 2018.
[12]
Z. Li, Z. Xu, R. Ramamoorthi, K. Sunkavalli, and M. Chandraker, “Learning to reconstruct shape and spatially-varying reflectance from a single image,” ACM Trans. Graph., vol. 37, no. 6, pp. 269:1–269:11, 2018.
[13]
Z. Xu, K. Sunkavalli, S. Hadap, and R. Ramamoorthi, “Deep image-based relighting from optimal sparse samples,” ACM Trans. Graph., vol. 37, no. 4, pp. 126:1–126:13, 2018.
[14]
D. Gao, X. Li, Y. Dong, P. Peers, K. Xu, and X. Tong, “Deep inverse rendering for high-resolution SVBRDF estimation from an arbitrary number of images,” ACM Trans. Graph., vol. 38, no. 4, pp. 134:1–134:15, 2019.
[15]
K. Guo et al., “The relightables: Volumetric performance capture of humans with realistic relighting,” ACM Trans. Graph., vol. 38, no. 6, pp. 217:1–217:19, 2019.
[16]
K. Kang et al., “Learning efficient illumination multiplexing for joint capture of reflectance and shape,” ACM Trans. Graph., vol. 38, no. 6, pp. 165:1–165:12, 2019.
[17]
A. Meka et al., “Deep reflectance fields: High-quality facial reflectance field inference from color gradient illumination,” ACM Trans. Graph., vol. 38, no. 4, pp. 77:1–77:12, 2019.
[18]
Z. Xu, S. Bi, K. Sunkavalli, S. Hadap, H. Su, and R. Ramamoorthi, “Deep view synthesis from sparse photometric images,” ACM Trans. Graph., vol. 38, no. 4, pp. 76:1–76:13, 2019.
[19]
M. Boss, V. Jampani, K. Kim, H. P. A. Lensch, and J. Kautz, “Two-shot spatially-varying BRDF and shape estimation,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2020, pp. 3982–3991.
[20]
D. Gao, G. Chen, Y. Dong, P. Peers, K. Xu, and X. Tong, “Deferred neural lighting: Free-viewpoint relighting from unstructured photographs,” ACM Trans. Graph., vol. 39, no. 6, pp. 258:1–258:15, 2020.
[21]
V. Deschaintre, Y. Lin, and A. Ghosh, “Deep polarization imaging for 3D shape and SVBRDF acquisition,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2021, pp. 15567–15576.
[22]
J. Guo et al., “Highlight-aware two-stream network for single-image SVBRDF acquisition,” ACM Trans. Graph., vol. 40, no. 4, pp. 123:1–123:14, Jul. 2021.
[23]
X. Ma, K. Kang, R. Zhu, H. Wu, and K. Zhou, “Free-form scanning of non-planar appearance with neural trace photography,” ACM Trans. Graph., vol. 40, no. 4, pp. 124:1–124:13, 2021.
[24]
S. J. Garbin, M. Kowalski, M. Johnson, J. Shotton, and J. Valentin, “FastNeRF: High-fidelity neural rendering at 200FPS,” in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2021, pp. 14346–14355.
[25]
P. Hedman, P. P. Srinivasan, B. Mildenhall, J. T. Barron, and P. Debevec, “Baking neural radiance fields for real-time view synthesis,” in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2021, pp. 5855–5864.
[26]
A. Gardner, C. Tchou, T. Hawkins, and P. Debevec, “Linear light source reflectometry,” ACM Trans. Graph., vol. 22, no. 3, pp. 749–758, 2003.
[27]
J. A. Paterson, D. Claus, and A. W. Fitzgibbon, “BRDF and geometry capture from extended inhomogeneous samples using flash photography,” Comput. Graph. Forum, vol. 24, no. 3, pp. 383–391, 2005.
[28]
M. Aittala, T. Weyrich, and J. Lehtinen, “Practical SVBRDF capture in the frequency domain,” ACM Trans. Graph., vol. 32, no. 4, pp. 110:1–110:12, 2013.
[29]
M. Aittala, T. Weyrich, and J. Lehtinen, “Two-shot SVBRDF capture for stationary materials,” ACM Trans. Graph., vol. 34, no. 4, pp. 110:1–110:13, 2015.
[30]
J. Riviere, P. Peers, and A. Ghosh, “Mobile surface reflectometry,” Comput. Graph. Forum, vol. 35, no. 1, pp. 191–202, 2015.
[31]
P. Henzler, V. Deschaintre, N. J. Mitra, and T. Ritschel, “Generative modelling of BRDF textures from flash images,” ACM Trans. Graph., vol. 40, no. 6, pp. 284:1–284:13, 2021.
[32]
M. M. Bagher, C. Soler, and N. Holzschuch, “Accurate fitting of measured reflectances using a shifted gamma micro-facet distribution,” Comput. Graph. Forum, vol. 31, no. 4, pp. 1509–1518, 2012.
[33]
J. Dupuy, E. Heitz, J.-C. Iehl, P. Poulin, and V. Ostromoukhov, “Extracting microfacet-based BRDF parameters from arbitrary materials with power iterations,” Comput. Graph. Forum, vol. 34, no. 4, pp. 21–30, 2015.
[34]
Z. Zhou et al., “Sparse-as-possible SVBRDF acquisition,” ACM Trans. Graph., vol. 35, no. 6, pp. 189:1–189:12, 2016.
[35]
H. Lensch, J. Kautz, M. Goesele, W. Heidrich, and H.-P. Seidel, “Image-based reconstruction of spatial appearance and geometric detail,” ACM Trans. Graph., vol. 22, no. 2, pp. 234–257, 2003.
[36]
B. Tunwattanapong et al., “Acquiring reflectance and shape from continuous spherical harmonic illumination,” ACM Trans. Graph., vol. 32, no. 4, pp. 109:1–109:12, 2013.
[37]
G. Fyffe, P. Graham, B. Tunwattanapong, A. Ghosh, and P. Debevec, “Near-instant capture of high-resolution facial geometry and reflectance,” Comput. Graph. Forum, vol. 35, no. 2, pp. 353–363, 2016.
[38]
K. Nishino, K. Ikeuchi, and Z. Zhang, “Re-rendering from a sparse set of images,” Dept. of Computer Science, Drexel Univ., Tech. Rep., 2005.
[39]
G. Palma, M. Callieri, M. Dellepiane, and R. Scopigno, “A statistical method for SVBRDF approximation from video sequences in general lighting conditions,” Comput. Graph. Forum, vol. 31, no. 4, pp. 1491–1500, 2012.
[40]
Y. Dong, G. Chen, P. Peers, J. Zhang, and X. Tong, “Appearance-from-motion: Recovering spatially varying surface reflectance under unknown lighting,” ACM Trans. Graph., vol. 33, no. 6, pp. 187:1–187:12, 2014.
[41]
R. Xia, Y. Dong, P. Peers, and X. Tong, “Recovering shape and spatially-varying surface reflectance under unknown illumination,” ACM Trans. Graph., vol. 35, no. 6, pp. 187:1–187:12, 2016.
[42]
P. P. Srinivasan, B. Deng, X. Zhang, M. Tancik, B. Mildenhall, and J. T. Barron, “NeRV: Neural reflectance and visibility fields for relighting and view synthesis,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2021, pp. 7495–7504.
[43]
X. Zhang, P. P. Srinivasan, B. Deng, P. Debevec, W. T. Freeman, and J. T. Barron, “NeRFactor: Neural factorization of shape and reflectance under an unknown illumination,” ACM Trans. Graph., vol. 40, no. 6, pp. 237:1–237:18, 2021.
[44]
A. Ghosh, T. Chen, P. Peers, C. A. Wilson, and P. Debevec, “Estimating specular roughness and anisotropy from second order spherical gradient illumination,” Comput. Graph. Forum, vol. 28, no. 4, pp. 1161–1170, 2009.
[45]
J. Riviere, I. Reshetouski, L. Filipi, and A. Ghosh, “Polarization imaging reflectometry in the wild,” ACM Trans. Graph., vol. 36, no. 6, pp. 206:1–206:14, 2017.
[46]
S. Laine, J. Hellsten, T. Karras, Y. Seol, J. Lehtinen, and T. Aila, “Modular primitives for high-performance differentiable rendering,” ACM Trans. Graph., vol. 39, no. 6, pp. 194:1–194:14, 2020.
[47]
Z. Hui, K. Sunkavalli, J.-Y. Lee, S. Hadap, J. Wang, and A. C. Sankaranarayanan, “Reflectance capture using univariate sampling of BRDFs,” in Proc. IEEE Int. Conf. Comput. Vis., 2017, pp. 5362–5370.
[48]
S. Bi et al., “Deep reflectance volumes: Relightable reconstructions from multi-view photometric images,” in Proc. Eur. Conf. Comput. Vis., 2020, pp. 294–311.
[49]
S. Bi et al., “Neural reflectance fields for appearance acquisition,” ACM Trans. Graph., vol. 38, no. 4, pp. 39:1–39:11, 2020.
[50]
C. S. McCamy et al., “A color-rendition chart,” J. Appl. Photographic Eng., vol. 2, no. 3, pp. 95–99, 1976.
[51]
P. Debevec, Y. Yu, and G. Borshukov, “Efficient view-dependent image-based rendering with projective texture-mapping,” in Proc. Eurographics Workshop Rendering Techn., 1998, pp. 105–116.
[52]
C. Everitt, “Projective texture mapping,” White paper, NVIDIA Corporation, 2001. [Online]. Available: https://web.archive.org/web/20191214200720
[53]
R. L. Cook and K. E. Torrance, “A reflectance model for computer graphics,” ACM Trans. Graph., vol. 1, no. 1, pp. 7–24, 1982.
[54]
G. Nam, J. H. Lee, H. Wu, D. Gutierrez, and M. H. Kim, “Simultaneous acquisition of microscale reflectance and normals,” ACM Trans. Graph., vol. 35, no. 6, pp. 185:1–185:11, 2016.
[55]
E. Heitz, “Understanding the masking-shadowing function in microfacet-based BRDFs,” J. Comput. Graph. Techn., vol. 3, no. 2, pp. 32–91, 2014.
[56]
K. Levenberg, “A method for the solution of certain non-linear problems in least squares,” Quart. Appl. Math., vol. 2, no. 2, pp. 164–168, 1944.
[57]
D. W. Marquardt, “An algorithm for least-squares estimation of nonlinear parameters,” J. Soc. Ind. Appl. Math., vol. 11, no. 2, pp. 431–441, 1963.
[58]
Khronos Group, “smoothstep,” 2014. [Online]. Available: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/smoothstep.xhtml
[59]
M. Ashikhmin and A. Ghosh, “Simple blurry reflections with environment maps,” J. Graph. Tools, vol. 7, no. 4, pp. 3–8, 2002.
[60]
T. Trowbridge and K. P. Reitz, “Average irregularity representation of a rough surface for ray reflection,” J. Opt. Soc. Amer., vol. 65, no. 5, pp. 531–536, 1975.
[61]
B. Walter, S. R. Marschner, H. Li, and K. E. Torrance, “Microfacet models for refraction through rough surfaces,” in Proc. 18th Eurographics Conf. Rendering Techn., 2007, pp. 195–206.

Published In

IEEE Transactions on Visualization and Computer Graphics, Volume 30, Issue 4, April 2024

Publisher

IEEE Educational Activities Department, United States