DOI: 10.1145/3584954.3584962
NICE Conference Proceedings · Research article · Public Access

Neuromorphic Downsampling of Event-Based Camera Output

Published: 12 April 2023

Abstract

In this work, we address the problem of training a neuromorphic agent on data from event-based cameras. Although event-based camera data is much sparser than standard video frames, the sheer number of events can make the observation space too complex for effective training. We construct multiple neuromorphic networks that downsample the camera data so as to make training more effective. We then present a case study in which we train an agent to play the Atari Pong game by converting each frame to events and downsampling them. The final network combines both the downsampling and the agent. We also discuss practical considerations.
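The abstract's pipeline (frames → events → spatially downsampled events) can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's neuromorphic implementation: it uses simple frame-difference thresholding for event generation (the paper's actual conversion uses a spiking network, and tools such as v2e exist for this) and OR-pooling over pixel blocks for downsampling.

```python
import numpy as np

def frames_to_events(prev, curr, threshold=15):
    """Naive frame-difference event conversion (illustrative only).
    A pixel emits an ON event if it brightens past the threshold,
    an OFF event if it darkens past it."""
    diff = curr.astype(np.int16) - prev.astype(np.int16)
    on = diff > threshold
    off = diff < -threshold
    return on, off

def downsample_events(events, factor=2):
    """Spatially pool a boolean event map: a coarse pixel fires if any
    event occurred in its factor x factor block (OR-pooling)."""
    h, w = events.shape
    h, w = h - h % factor, w - w % factor
    blocks = events[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.any(axis=(1, 3))

# Toy example on two 4x4 grayscale "frames"
prev = np.zeros((4, 4), dtype=np.uint8)
curr = np.zeros((4, 4), dtype=np.uint8)
curr[0, 0] = 200                      # one pixel brightens sharply
on, off = frames_to_events(prev, curr)
coarse = downsample_events(on, factor=2)
print(coarse.shape)                   # (2, 2)
print(bool(coarse[0, 0]))             # True
```

The OR-pooling choice preserves the sparse, binary character of the event stream, which is why pooling-style reductions are a natural fit for spiking networks that fire on any input spike within a receptive field.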


Cited By

  • Simultaneous Velocity and Texture Classification from a Neuromorphic Tactile Sensor Using Spiking Neural Networks. Electronics 13, 11 (2159), 1 June 2024. DOI: 10.3390/electronics13112159
  • A Neuromorphic System for the Real-time Classification of Natural Textures. 2024 IEEE International Conference on Robotics and Automation (ICRA), 1070–1076, 13 May 2024. DOI: 10.1109/ICRA57147.2024.10610401
  • Stakes of neuromorphic foveation: a promising future for embedded event cameras. Biological Cybernetics 117, 4–5, 389–406, 21 September 2023. DOI: 10.1007/s00422-023-00974-9


Published In

NICE '23: Proceedings of the 2023 Annual Neuro-Inspired Computational Elements Conference
April 2023, 124 pages
ISBN: 9781450399470
DOI: 10.1145/3584954
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. arcade learning environment
  2. event-based cameras
  3. preprocessing
  4. spiking neural networks

Qualifiers

  • Research-article
  • Research
  • Refereed limited


Conference

NICE 2023

Acceptance Rates

Overall acceptance rate: 25 of 40 submissions (63%)


Article Metrics

  • Downloads (last 12 months): 135
  • Downloads (last 6 weeks): 25
Reflects downloads up to 13 January 2025.

