DOI: 10.1145/1076034.1076062

Combining eye movements and collaborative filtering for proactive information retrieval

Published: 15 August 2005

Abstract

We study a new task, proactive information retrieval, which combines implicit relevance feedback and collaborative filtering. We have constructed a controlled experimental setting, a prototype application in which users try to find interesting scientific articles by browsing their titles. Implicit feedback is inferred from eye-movement signals with discriminative hidden Markov models, estimated from existing data for which explicit relevance feedback is available. Collaborative filtering is carried out with the User Rating Profile model, a state-of-the-art probabilistic latent-variable model, computed using Markov chain Monte Carlo techniques. For new document titles, the prediction accuracy with eye movements, with collaborative filtering, and with their combination was significantly better than chance. The best prediction accuracy still leaves room for improvement, but it shows that proactive information retrieval, combining several sources of relevance feedback, is feasible.
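The pipeline the abstract describes can be sketched roughly as follows: score a title's eye-movement sequence under two class-conditional hidden Markov models ("relevant" vs. "irrelevant" reading), turn the likelihoods into a posterior, and combine it with a collaborative-filtering estimate. Everything below is illustrative: the state counts, the three-symbol eye-movement alphabet, the hand-set model parameters, and the weighted-average combination are assumptions for the sketch, not the paper's discriminative HMMs or the User Rating Profile model.

```python
import numpy as np

def hmm_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm to avoid underflow."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict-then-update forward step
        s = alpha.sum()
        ll += np.log(s)
        alpha = alpha / s
    return ll

# Hypothetical 2-state HMMs over 3 discretized eye-movement symbols
# (e.g. 0 = short fixation, 1 = long fixation, 2 = regression).
pi = np.array([0.6, 0.4])
A_rel = np.array([[0.7, 0.3], [0.4, 0.6]])          # "relevant" reading model
B_rel = np.array([[0.1, 0.6, 0.3], [0.5, 0.3, 0.2]])
A_irr = np.array([[0.9, 0.1], [0.5, 0.5]])          # "irrelevant" skimming model
B_irr = np.array([[0.6, 0.2, 0.2], [0.4, 0.4, 0.2]])

def p_relevant_eye(obs, prior=0.5):
    """Posterior P(relevant | eye movements) from the two class HMMs."""
    lr = hmm_loglik(obs, pi, A_rel, B_rel)
    li = hmm_loglik(obs, pi, A_irr, B_irr)
    pr = prior * np.exp(lr)
    return pr / (pr + (1 - prior) * np.exp(li))

def combined_relevance(obs, p_cf, w=0.5):
    """Combine the eye-movement posterior with a collaborative-filtering
    relevance estimate p_cf via a simple weighted average (an assumed
    combination rule, not the one used in the paper)."""
    return w * p_relevant_eye(obs) + (1 - w) * p_cf
```

A sequence dominated by long fixations scores higher under the "relevant" model here, so `p_relevant_eye([1, 1, 2, 1])` exceeds 0.5; the weighted average then pulls the final score toward the collaborative-filtering estimate.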



Published In

SIGIR '05: Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval
August 2005
708 pages
ISBN: 1595930345
DOI: 10.1145/1076034
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. collaborative filtering
  2. eye movements
  3. hidden Markov model
  4. latent variable model
  5. mixture model
  6. proactive information retrieval
  7. relevance feedback

Qualifiers

  • Article

Conference

SIGIR '05

Acceptance Rates

Overall Acceptance Rate 792 of 3,983 submissions, 20%


Cited By

  • (2022) A reference dependence approach to enhancing early prediction of session behavior and satisfaction. Proceedings of the 22nd ACM/IEEE Joint Conference on Digital Libraries, 1-5. DOI: 10.1145/3529372.3533294
  • (2020) Inferring Intent and Action from Gaze in Naturalistic Behavior. Cognitive Analytics, 1464-1482. DOI: 10.4018/978-1-7998-2460-2.ch074
  • (2020) The Role of Word-Eye-Fixations for Query Term Prediction. Proceedings of the 2020 Conference on Human Information Interaction and Retrieval, 422-426. DOI: 10.1145/3343413.3378010
  • (2020) Tracking the Progression of Reading Using Eye-Gaze Point Measurements and Hidden Markov Models. IEEE Transactions on Instrumentation and Measurement, 69(10):7857-7868. DOI: 10.1109/TIM.2020.2983525
  • (2020) Tool for image annotation based on gaze. 2020 International Conference on Signal Processing and Communications (SPCOM), 1-5. DOI: 10.1109/SPCOM50965.2020.9179496
  • (2019) Tracking the Progression of Reading Through Eye-gaze Measurements. 2019 22nd International Conference on Information Fusion (FUSION), 1-8. DOI: 10.23919/FUSION43075.2019.9011436
  • (2019) Integrating neurophysiologic relevance feedback in intent modeling for information retrieval. Journal of the Association for Information Science and Technology, 70(9):917-930. DOI: 10.1002/asi.24161
  • (2018) Procrastination is the Thief of Time. The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, 1157-1160. DOI: 10.1145/3209978.3210114
  • (2017) Inferring Intent and Action from Gaze in Naturalistic Behavior. International Journal of Mobile Human Computer Interaction, 9(4):41-57. DOI: 10.5555/3213394.3213398
  • (2017) Gaze movement-driven random forests for query clustering in automatic video annotation. Multimedia Tools and Applications, 76(2):2861-2889. DOI: 10.5555/3048787.3048837
