EyeCoD: eye tracking system acceleration via flatcam-based algorithm & accelerator co-design
Proceedings of the 49th Annual International Symposium on Computer Architecture (ISCA), 2022
Eye tracking has become an essential human-machine interaction modality for providing an immersive experience in numerous virtual and augmented reality (VR/AR) applications that demand high throughput (e.g., 240 FPS), a small form factor, and enhanced visual privacy. However, existing eye tracking systems are still limited by (1) a large form factor, largely due to the bulky lens-based cameras they adopt; (2) the high communication cost between the camera and the backend processor; and (3) low visual privacy, raising concerns that limit their broader adoption. To this end, we propose, develop, and validate a lensless FlatCam-based eye tracking algorithm and accelerator co-design framework, dubbed EyeCoD, that enables eye tracking systems with a much reduced form factor and boosted system efficiency without sacrificing tracking accuracy, paving the way for next-generation eye tracking solutions.

On the system level, we advocate the use of lensless FlatCams instead of lens-based cameras to meet the small-form-factor need of mobile eye tracking systems, which also leaves room for a dedicated sensing-processor co-design that reduces the required camera-processor communication latency. On the algorithm level, EyeCoD integrates a predict-then-focus pipeline that first predicts the region-of-interest (ROI) via segmentation and then focuses only on the ROI to estimate gaze directions, greatly reducing redundant computations and data movements. On the hardware level, we further develop a dedicated accelerator that (1) integrates a novel workload orchestration between the aforementioned segmentation and gaze estimation models, (2) leverages intra-channel reuse opportunities in depth-wise layers, (3) applies an input feature-wise partition to reduce the activation memory size, and (4) adopts a sequential-write-parallel-read input buffer to alleviate the bandwidth requirement on the activation global buffer.

On-silicon measurement and extensive experiments validate that EyeCoD consistently reduces both communication and computation costs, yielding overall system speedups of 10.95× over CPUs, 3.21× over GPUs, and 12.85× over CIS-GEP, a prior-art eye tracking processor, while maintaining tracking accuracy. Code is available at https://github.com/RICE-EIC/EyeCoD.
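The predict-then-focus pipeline can be pictured as a two-stage inference loop: a segmentation model localizes the eye region, and the gaze estimator then runs only on that crop. Below is a minimal PyTorch sketch of this idea; the model interfaces, the crop_roi helper, the 0.5 mask threshold, and the 64×64 gaze-network input size are all illustrative assumptions, not the paper's implementation (see the linked repository for the authors' code).

```python
# Hedged sketch of a predict-then-focus pipeline (illustrative, not EyeCoD's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def crop_roi(frame: torch.Tensor, mask: torch.Tensor, pad: int = 4) -> torch.Tensor:
    """Crop the frame to the bounding box of the predicted eye-region mask."""
    ys, xs = torch.nonzero(mask > 0.5, as_tuple=True)  # assumed 0.5 threshold
    if ys.numel() == 0:            # no eye detected: fall back to the full frame
        return frame
    y0 = max(int(ys.min()) - pad, 0)
    y1 = min(int(ys.max()) + 1 + pad, frame.shape[-2])
    x0 = max(int(xs.min()) - pad, 0)
    x1 = min(int(xs.max()) + 1 + pad, frame.shape[-1])
    return frame[..., y0:y1, x0:x1]


def predict_then_focus(frame: torch.Tensor, seg_model: nn.Module,
                       gaze_model: nn.Module) -> torch.Tensor:
    """Stage 1 predicts the ROI via segmentation; stage 2 estimates gaze
    only on the cropped ROI, skipping computation on the rest of the frame."""
    with torch.no_grad():
        mask = seg_model(frame)            # assumed (1, 1, H, W) probabilities
        roi = crop_roi(frame, mask[0, 0])  # keep only the eye region
        roi = F.interpolate(roi, size=(64, 64))  # assumed fixed gaze-net input
        return gaze_model(roi)             # e.g., a (yaw, pitch) gaze direction
```

Because stage 2 sees only the ROI pixels, both the compute and the data moved between stages shrink with the ROI size, which is the redundancy reduction the abstract describes.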
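The intra-channel reuse exploited for depth-wise layers can be pictured with a plain loop nest: since a depth-wise layer assigns one filter per channel, the input rows fetched for one output position of a channel are reused across all overlapping windows of that same channel. The NumPy sketch below, with hypothetical shapes, stride 1, and no padding, illustrates only this reuse pattern, not the accelerator's actual dataflow or buffering.

```python
# Didactic sketch of intra-channel reuse in a depth-wise convolution
# (illustrative assumption, not the EyeCoD accelerator's dataflow).
import numpy as np


def depthwise_conv_intra_channel_reuse(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """x: (C, H, W) input; w: (C, K, K) per-channel filters; stride 1, no padding."""
    C, H, W = x.shape
    _, K, _ = w.shape
    out = np.zeros((C, H - K + 1, W - K + 1))
    for c in range(C):                   # depth-wise: one filter per channel
        rows = x[c]                      # channel fetched once from "memory"
        for i in range(H - K + 1):
            window_rows = rows[i:i + K]  # K rows reused across every column j
            for j in range(W - K + 1):
                out[c, i, j] = np.sum(window_rows[:, j:j + K] * w[c])
    return out
```

In hardware, this sliding-window overlap means a buffered channel slice can feed many multiply-accumulates before being evicted, cutting repeated fetches from the activation global buffer.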