Videoflow: Exploiting temporal cues for multi-frame optical flow estimation

X Shi, Z Huang, W Bian, D Li… - Proceedings of the IEEE/CVF International Conference on …, 2023 - openaccess.thecvf.com
Abstract
We introduce VideoFlow, a novel optical flow estimation framework for videos. In contrast to previous methods that learn to estimate optical flow from two frames, VideoFlow concurrently estimates bi-directional optical flows for multiple frames that are available in videos by sufficiently exploiting temporal cues. We first propose a TRi-frame Optical Flow (TROF) module that estimates bi-directional optical flows for the center frame in a three-frame manner. The information of the frame triplet is iteratively fused onto the center frame. To extend TROF for handling more frames, we further propose a MOtion Propagation (MOP) module that bridges multiple TROFs and propagates motion features between adjacent TROFs. With the iterative flow estimation refinement, the information fused in individual TROFs can be propagated into the whole sequence via MOP. By effectively exploiting video information, VideoFlow presents extraordinary performance, ranking 1st on all public benchmarks. On the Sintel benchmark, VideoFlow achieves 1.649 and 0.991 average end-point error (AEPE) on the final and clean passes, a 15.1% and 7.6% error reduction from the best published results (1.943 and 1.073 from FlowFormer++). On the KITTI-2015 benchmark, VideoFlow achieves an F1-all error of 3.65%, a 19.2% error reduction from the best published result (4.52% from FlowFormer++). Code is released at https://github.com/XiaoyuShi97/VideoFlow.
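
The TROF/MOP structure described in the abstract can be illustrated with a minimal PyTorch-style sketch. This is only a rough reading of the abstract, not the paper's method: the names TROF and MOP come from the paper, but the fuse and update convolutions, the shared motion state, the iteration count, and all tensor shapes are assumptions introduced here for illustration; the released repository above is the authoritative implementation.

    import torch
    import torch.nn as nn

    class TROF(nn.Module):
        # Hypothetical sketch of the TRi-frame Optical Flow (TROF) idea:
        # iteratively refine bi-directional flows for the center frame of a
        # triplet while fusing triplet information onto that center frame.
        # Layer choices and shapes are assumptions, not the paper's design.
        def __init__(self, dim=128, iters=4):
            super().__init__()
            self.iters = iters
            self.fuse = nn.Conv2d(3 * dim, dim, kernel_size=3, padding=1)
            self.update = nn.Conv2d(dim, 4, kernel_size=3, padding=1)  # (dx, dy) x 2 flows

        def forward(self, f_prev, f_center, f_next, state):
            b, _, h, w = f_center.shape
            flow_bwd = f_center.new_zeros(b, 2, h, w)  # center -> previous frame
            flow_fwd = f_center.new_zeros(b, 2, h, w)  # center -> next frame
            for _ in range(self.iters):
                # Fuse information from the frame triplet onto the center frame,
                # carrying the motion state across iterations (and across TROFs).
                state = state + self.fuse(torch.cat([f_prev, f_center, f_next], dim=1))
                delta = self.update(state)
                flow_bwd = flow_bwd + delta[:, :2]
                flow_fwd = flow_fwd + delta[:, 2:]
            return flow_bwd, flow_fwd, state

    class MOP(nn.Module):
        # Hypothetical MOtion Propagation (MOP) sketch: run one TROF per
        # overlapping triplet and hand the motion state from each TROF to the
        # next, so fused information propagates through the whole sequence.
        def __init__(self, dim=128, iters=4):
            super().__init__()
            self.dim = dim
            self.trof = TROF(dim, iters)

        def forward(self, feats):
            # feats: list of per-frame feature maps, each of shape (b, dim, h, w)
            b, _, h, w = feats[0].shape
            state = feats[0].new_zeros(b, self.dim, h, w)
            flows = []
            for i in range(1, len(feats) - 1):
                bwd, fwd, state = self.trof(feats[i - 1], feats[i], feats[i + 1], state)
                flows.append((bwd, fwd))
            return flows  # bi-directional flows for every interior (center) frame

    # Toy usage: 5 frames of 128-channel features yield flows for 3 center frames.
    feats = [torch.randn(1, 128, 48, 64) for _ in range(5)]
    flows = MOP()(feats)

The key point the sketch tries to capture is that the motion state threaded through consecutive TROFs is what lets information fused at one triplet reach the rest of the sequence during iterative refinement.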