DOI: 10.1145/2393347.2396378

Touch saliency

Published: 29 October 2012

Abstract

In this work, we propose the new concept of touch saliency and attempt to answer the question of whether an image's underlying saliency map may be implicitly derived from accumulated touch behaviors (more specifically, zoom-in and panning manipulations) when many users browse the image on smart mobile devices with small multi-touch displays. Touch saliency maps are collected for the images of the recently released NUSEF dataset, and the preliminary comparison study demonstrates that: 1) the touch saliency map is highly correlated with the human eye fixation map for the same stimuli, yet compared with eye tracking, touch data collection is far more flexible and requires no explicit cooperation from users; and 2) touch saliency is also well predicted by popular saliency detection algorithms. This study opens a new research direction for multimedia analysis: harnessing human touch information on increasingly popular multi-touch smart mobile devices.
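
The pipeline the abstract describes can be prototyped in a few lines: log each user's zoom-in and panning manipulations, accumulate the exposed image regions across many users into a per-pixel map, and compare that map against an eye-fixation map. The sketch below is illustrative only and is not the authors' implementation; the viewport-log format, the inverse-area weighting (deeper zoom taken as stronger interest), the smoothing bandwidth, and the helper names touch_saliency_map and correlation_coefficient are all assumptions of this sketch. Pearson's linear correlation coefficient (CC) is a standard saliency-agreement metric and is consistent with the "highly correlated" comparison above.

    # Minimal sketch (not the authors' code): accumulate logged viewport
    # rectangles from zoom/pan interactions into a "touch saliency" heatmap,
    # then score it against an eye-fixation map with Pearson's CC.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def touch_saliency_map(image_shape, viewports, sigma=15.0):
        """Accumulate viewing exposure over all users' viewport rectangles.

        image_shape -- (height, width) of the stimulus image
        viewports   -- iterable of (x, y, w, h) rectangles in image
                       coordinates, one per sampled zoom/pan state
        sigma       -- smoothing bandwidth in pixels (assumed free parameter)
        """
        h, w = image_shape
        acc = np.zeros((h, w), dtype=np.float64)
        for x, y, vw, vh in viewports:
            x0, y0 = max(0, int(x)), max(0, int(y))
            x1, y1 = min(w, int(x + vw)), min(h, int(y + vh))
            if x1 > x0 and y1 > y0:
                # Assumed weighting: a smaller viewport means deeper zoom and
                # stronger interest, so weight exposure inversely by its area.
                acc[y0:y1, x0:x1] += 1.0 / ((x1 - x0) * (y1 - y0))
        acc = gaussian_filter(acc, sigma)  # smooth counts into a saliency map
        return acc / acc.max() if acc.max() > 0 else acc

    def correlation_coefficient(sal_map, fix_map):
        """Pearson CC between two maps, a standard saliency-agreement metric."""
        s = (sal_map - sal_map.mean()) / (sal_map.std() + 1e-12)
        f = (fix_map - fix_map.mean()) / (fix_map.std() + 1e-12)
        return float((s * f).mean())

Under this sketch, a user who lingers zoomed in on a face contributes heavily to that region; summing such exposure over many users and sessions yields a touch saliency map that can then be correlated against fixation maps such as those in NUSEF.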

References

[1] T. Avraham and M. Lindenbaum. Esaliency (extended saliency): Meaningful attention using stochastic image modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 32, 2010.
[2] N. D. B. Bruce and J. K. Tsotsos. Saliency based on information maximization. In Advances in Neural Information Processing Systems (NIPS), 2005.
[3] N. D. B. Bruce and J. K. Tsotsos. Saliency, attention, and visual search: An information theoretic approach. Journal of Vision, 9:1--24, 2009.
[4] K. Ehinger, B. Hidalgo-Sotelo, A. Torralba, and A. Oliva. Modeling search for people in 900 scenes. Visual Cognition, 17:945--978, 2009.
[5] J. Harel, C. Koch, and P. Perona. Graph-based visual saliency. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems (NIPS), pages 545--552. MIT Press, Cambridge, MA, 2007.
[6] R. Hong, M. Wang, M. Xu, S. Yan, and T.-S. Chua. Dynamic captioning: Video accessibility enhancement for hearing impairment. In ACM International Conference on Multimedia (ACM MM), 2010.
[7] R. Hong, M. Wang, X.-T. Yuan, M. Xu, J. Jiang, S. Yan, and T.-S. Chua. Video accessibility enhancement for hearing impaired users. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP), 7S(1):24--42, 2011.
[8] X. Hou and L. Zhang. Dynamic visual attention: Searching for coding length increments. In Advances in Neural Information Processing Systems (NIPS), 2008.
[9] L. Itti, C. Koch, and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 20:1254--1259, 1998.
[10] T. Judd, K. Ehinger, F. Durand, and A. Torralba. Learning to predict where humans look. In IEEE International Conference on Computer Vision (ICCV), 2009.
[11] P. Lang, M. M. Bradley, and B. N. Cuthbert. International Affective Picture System (IAPS): Affective ratings of pictures and instruction manual. Technical Report A-8, University of Florida, Gainesville, FL, 2008.
[12] T. Liu, J. Sun, N.-N. Zheng, X. Tang, and H.-Y. Shum. Learning to detect a salient object. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2007.
[13] O. Le Meur, P. Le Callet, D. Barba, and D. Thoreau. A coherent computational approach to model the bottom-up visual attention. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 28, 2006.
[14] S. Ramanathan, H. Katti, N. Sebe, M. Kankanhalli, and T.-S. Chua. An eye fixation database for saliency detection in images. In European Conference on Computer Vision (ECCV), Crete, Greece, 2010.
[15] V. Setlur, S. Takagi, R. Raskar, M. Gleicher, and B. Gooch. Automatic image retargeting. In Proceedings of the 4th International Conference on Mobile and Ubiquitous Multimedia (MUM), 2005.
[16] I. van der Linde, U. Rajashekar, A. C. Bovik, and L. K. Cormack. DOVES: A database of visual eye movements. Spatial Vision, 22(2):161--177, 2009.
[17] M. Wang, Y. Sheng, B. Liu, and X.-S. Hua. In-image accessibility indication. IEEE Transactions on Multimedia (TMM), 12(4):330--336, 2010.


Published In

MM '12: Proceedings of the 20th ACM international conference on Multimedia
October 2012
1584 pages
ISBN:9781450310895
DOI:10.1145/2393347
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org


Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 29 October 2012


Author Tags

  1. fixations
  2. touch saliency
  3. visual saliency

Qualifiers

  • Poster

Conference

MM '12: ACM Multimedia Conference
October 29 - November 2, 2012
Nara, Japan

Acceptance Rates

Overall Acceptance Rate 2,145 of 8,556 submissions, 25%


Cited By

  • (2023) Attention for Robot Touch: Tactile Saliency Prediction for Robust Sim-to-Real Tactile Control. 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 10806--10812. DOI: 10.1109/IROS55552.2023.10341888. Online publication date: 1-Oct-2023.
  • (2016) Tactile mesh saliency. ACM Transactions on Graphics, 35(4):1--11. DOI: 10.1145/2897824.2925927. Online publication date: 11-Jul-2016.
  • (2016) Dual Low-Rank Pursuit: Learning Salient Features for Saliency Detection. IEEE Transactions on Neural Networks and Learning Systems, 27(6):1190--1200. DOI: 10.1109/TNNLS.2015.2513393. Online publication date: Jun-2016.
  • (2015) Discovering salient regions on 3D photo-textured maps. Computer Vision and Image Understanding, 131:28--41. DOI: 10.1016/j.cviu.2014.07.006. Online publication date: 1-Feb-2015.
  • (2014) Touch Saliency: Characteristics and Prediction. IEEE Transactions on Multimedia, 16(6):1779--1791. DOI: 10.1109/TMM.2014.2329275. Online publication date: Oct-2014.
  • (2014) Crowdsourced saliency for mining robotically gathered 3D maps using multitouch interaction on smartphones and tablets. 2014 IEEE International Conference on Robotics and Automation (ICRA), pages 6032--6039. DOI: 10.1109/ICRA.2014.6907748. Online publication date: May-2014.
  • (2013) Learning image saliency from human touch behaviors. 2013 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), pages 1--4. DOI: 10.1109/ICMEW.2013.6618249. Online publication date: Jul-2013.
