research-article
Open access

LumNet: Learning to Estimate Vertical Visual Field Luminance for Adaptive Lighting Control

Published: 24 June 2021

Abstract

    High-quality lighting positively influences visual performance in humans. Experienced visual performance can be quantified through desktop luminance, and several lighting control systems have been built around this measure. However, the devices used to monitor desktop luminance in existing lighting control systems are obtrusive to users. As an unobtrusive alternative, ceiling-based luminance projection sensors have recently been adopted, since they can capture a user's direct task area. Positioning these devices on the ceiling, however, requires estimating the desktop luminance in the user's vertical visual field solely from ceiling-based measurements in order to better predict the user's experienced visual performance. For this purpose, we present LumNet, an approach that estimates desktop luminance with deep models trained through supervised and self-supervised learning. Our model learns visual representations from ceiling-based images, collected in indoor spaces within the physical vicinity of the user, to predict the average desktop luminance as experienced in a real-life setting. We also propose a self-supervised contrastive method for pre-training LumNet with unlabeled data, and demonstrate that the learned features transfer to a small labeled dataset, minimizing the need for costly data annotation. Experiments on domain-specific datasets show that our approach significantly improves over baseline results from prior luminance-estimation methods, particularly in the low-data regime. LumNet is an important step toward learning-based luminance estimation and, thanks to its minimal computational footprint, can drive adaptive lighting control directly on-device, with the added benefit of preserving the user's privacy.
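    The self-supervised contrastive pre-training described above is, in spirit, a SimCLR-style objective over pairs of augmented views. As a rough illustration only (this is not the paper's implementation; the function name, batch layout, and temperature value are assumptions), the core NT-Xent contrastive loss can be sketched in plain NumPy:

    ```python
    import numpy as np

    def nt_xent_loss(z1, z2, temperature=0.5):
        """SimCLR-style NT-Xent loss for a batch of embedding pairs.

        z1, z2: (N, d) arrays holding embeddings of two augmented views
        of the same N images; row i of z1 and row i of z2 are positives.
        """
        z = np.concatenate([z1, z2], axis=0)              # (2N, d)
        z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
        sim = (z @ z.T) / temperature                     # scaled cosine similarities
        np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
        n = z1.shape[0]
        # The positive partner of row i is row i+n (and vice versa).
        pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
        # Row-wise cross-entropy where the "correct class" is the positive pair.
        logsumexp = np.log(np.exp(sim).sum(axis=1))
        return float((logsumexp - sim[np.arange(2 * n), pos]).mean())
    ```

    Minimizing this loss pulls embeddings of two views of the same image together while pushing apart all other images in the batch; well-aligned view pairs therefore yield a lower loss than randomly paired ones.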


    Published In

    Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 5, Issue 2
    June 2021
    932 pages
    EISSN:2474-9567
    DOI:10.1145/3472726
    This work is licensed under a Creative Commons Attribution International 4.0 License.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 24 June 2021
    Published in IMWUT Volume 5, Issue 2


    Author Tags

    1. HDR
    2. adaptive lighting
    3. ambient intelligence
    4. deep learning
    5. luminance estimation
    6. self-supervised learning

    Qualifiers

    • Research-article
    • Research
    • Refereed

    Funding Sources

    • Optilight, 14671

