We present a novel method for monocular hand gesture recognition in ego-vision scenarios that deals with both static and dynamic gestures and can achieve high accuracy using only a few positive samples. Specifically, we use and extend the dense trajectories approach that has been successfully introduced for action recognition. Dense features are extracted around regions selected by a new hand segmentation technique that integrates superpixel classification with temporal and spatial coherence. We extensively test our gesture recognition and segmentation algorithms on public datasets and propose a new dataset shot with a wearable camera. In addition, we demonstrate that our solution can work in near real-time on a wearable device.
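To make the segmentation-plus-trajectories pipeline concrete, the following is a minimal, hypothetical sketch in Python/OpenCV, not the authors' implementation: superpixels are labeled hand/non-hand with a crude YCrCb skin-color test standing in for the paper's trained classifier with temporal and spatial coherence, and points sampled densely inside the resulting mask are propagated with Farnebäck optical flow to form short trajectories. All function names, thresholds, and parameters are illustrative assumptions; opencv-contrib-python is required for the SLIC superpixel module.

```python
# Illustrative sketch only: a crude stand-in for the abstract's pipeline of
# superpixel-based hand segmentation followed by dense trajectory extraction.
import cv2
import numpy as np

def hand_mask_from_superpixels(frame_bgr, region_size=24):
    """Label each SLIC superpixel hand/non-hand via a simple YCrCb skin test."""
    slic = cv2.ximgproc.createSuperpixelSLIC(
        frame_bgr, algorithm=cv2.ximgproc.SLICO, region_size=region_size)
    slic.iterate(10)
    labels = slic.getLabels()
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    # Per-pixel skin test (common Cr/Cb bounds), then majority vote per superpixel.
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127)) > 0
    mask = np.zeros(labels.shape, dtype=np.uint8)
    for lbl in range(slic.getNumberOfSuperpixels()):
        region = labels == lbl
        if skin[region].mean() > 0.5:  # most pixels in this superpixel look like skin
            mask[region] = 255
    return mask

def dense_trajectories(frames, step=8, length=15):
    """Track points densely sampled inside the hand mask of the first frame."""
    mask = hand_mask_from_superpixels(frames[0])
    ys, xs = np.mgrid[0:mask.shape[0]:step, 0:mask.shape[1]:step]
    keep = mask[ys, xs] > 0
    pts = np.stack([xs[keep], ys[keep]], axis=1).astype(np.float32)
    trajectories = [[(float(x), float(y))] for x, y in pts]
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:length + 1]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(
            prev, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        for traj in trajectories:
            x, y = traj[-1]
            xi = int(np.clip(round(x), 0, flow.shape[1] - 1))
            yi = int(np.clip(round(y), 0, flow.shape[0] - 1))
            dx, dy = flow[yi, xi]
            traj.append((x + dx, y + dy))  # follow the dense flow field
        prev = gray
    # In a full pipeline, HOG/HOF/MBH descriptors would be computed along
    # these tracks, as in the dense trajectories literature.
    return trajectories
```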
We introduce a novel approach to the cultural heritage experience: by means of ego-vision embedded devices, we develop a system that offers a more natural and entertaining way of accessing museum knowledge. Our method is based on distributed self-gesture and artwork recognition, and requires neither fixed cameras nor RFID sensors. We propose the use of dense trajectories sampled around the hand region to perform self-gesture recognition, understanding the way a user naturally interacts with an artwork, and demonstrate that our approach can benefit from distributed training. We test our algorithms on publicly available datasets and extend our experiments to both virtual and real museum scenarios, where our method shows robustness when challenged with real-world data. Furthermore, we run an extensive performance analysis on our ARM-based wearable device.
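For the downstream gesture classification stage, here is a hedged sketch assuming a bag-of-words encoding of trajectory descriptors and a linear SVM; these are common choices in dense-trajectory pipelines, but the abstracts above do not specify the encoding or classifier, so every name, parameter, and data shape below is an illustrative assumption.

```python
# Hypothetical sketch of the classification stage: per-clip trajectory
# descriptors are quantized into a bag-of-words histogram and fed to a
# linear SVM. Codebook size, C, and descriptor dimensionality are assumed.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def encode_bow(descriptors, codebook):
    """Quantize local descriptors against a codebook into an L1-normalized histogram."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(np.float64)
    return hist / max(hist.sum(), 1.0)

# Toy data standing in for per-clip trajectory descriptors (e.g. HOG/HOF/MBH).
rng = np.random.default_rng(0)
train_clips = [rng.normal(size=(200, 96)) + (i % 2) for i in range(20)]
train_labels = [i % 2 for i in range(20)]  # two gesture classes

codebook = KMeans(n_clusters=64, n_init=4, random_state=0)
codebook.fit(np.vstack(train_clips))

X = np.array([encode_bow(clip, codebook) for clip in train_clips])
clf = LinearSVC(C=1.0).fit(X, train_labels)

test_clip = rng.normal(size=(200, 96)) + 1
print("predicted gesture:", clf.predict([encode_bow(test_clip, codebook)])[0])
```

In a distributed-training setting like the one the abstract mentions, each device could contribute locally extracted descriptors (or local histograms) to a shared codebook and classifier; the aggregation scheme sketched here is an assumption, not a detail from the paper.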