DOI: 10.1145/2856767.2856803

Interactive Intent Modeling from Multiple Feedback Domains

Published: 07 March 2016

Abstract

    In exploratory search, the user starts with an uncertain information need and provides relevance feedback to the system's suggestions to direct the search. The search system learns the user intent based on this feedback and employs it to recommend novel results. However, the amount of user feedback is very limited compared to the size of the information space to be explored. To tackle this problem, we take into account user feedback on both the retrieved items (documents) and their features (keywords). In order to combine feedback from multiple domains, we introduce a coupled multi-armed bandits algorithm, which employs a probabilistic model of the relationship between the domains. Simulation results show that with multi-domain feedback, the search system can find the relevant items in fewer iterations than with only one domain. A preliminary user study indicates improvement in user satisfaction and quality of retrieved information.
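To illustrate the idea of coupling feedback domains, here is a minimal sketch of a two-domain bandit. It is a hypothetical simplification, not the paper's actual probabilistic model: it assumes Beta-Bernoulli Thompson sampling over documents, with keyword feedback propagated (down-weighted by an illustrative `coupling` parameter) to every document containing that keyword. The class and method names are invented for this sketch.

```python
import random

class CoupledThompsonBandit:
    """Sketch of a coupled two-domain bandit: Thompson sampling over
    documents, where keyword feedback also updates the posteriors of
    documents containing that keyword (a simplified stand-in for the
    paper's probabilistic cross-domain model)."""

    def __init__(self, doc_keywords, coupling=0.5):
        # doc_keywords: dict mapping doc_id -> set of keywords in that document
        self.doc_keywords = doc_keywords
        self.coupling = coupling  # weight given to cross-domain evidence
        # Beta(alpha, beta) posterior over each document's relevance
        self.alpha = {d: 1.0 for d in doc_keywords}
        self.beta = {d: 1.0 for d in doc_keywords}

    def suggest(self, k=3):
        # Thompson sampling: draw one sample per posterior, rank by sample
        samples = {d: random.betavariate(self.alpha[d], self.beta[d])
                   for d in self.doc_keywords}
        return sorted(samples, key=samples.get, reverse=True)[:k]

    def feedback_document(self, doc, relevant):
        # Direct (document-domain) feedback: full-weight posterior update
        if relevant:
            self.alpha[doc] += 1.0
        else:
            self.beta[doc] += 1.0

    def feedback_keyword(self, keyword, relevant):
        # Cross-domain feedback: propagate keyword relevance, down-weighted,
        # to every document that contains the keyword
        for d, kws in self.doc_keywords.items():
            if keyword in kws:
                if relevant:
                    self.alpha[d] += self.coupling
                else:
                    self.beta[d] += self.coupling
```

Because a single keyword judgment touches many documents at once, each round of feedback moves more of the posterior than document clicks alone, which is the intuition behind the paper's finding that multi-domain feedback reaches relevant items in fewer iterations.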



        Published In

        IUI '16: Proceedings of the 21st International Conference on Intelligent User Interfaces
        March 2016
        446 pages
        ISBN:9781450341370
        DOI:10.1145/2856767

        Publisher

        Association for Computing Machinery

        New York, NY, United States


        Author Tags

        1. exploratory search
        2. intent modeling
        3. multi-armed bandits
        4. probabilistic user models
        5. relevance feedback

        Qualifiers

        • Short-paper

        Funding Sources

        • Re:Know funded by TEKES
        • European Union in the Seventh Framework Programme

        Conference

        IUI'16

        Acceptance Rates

IUI '16 Paper Acceptance Rate: 49 of 194 submissions, 25%
Overall Acceptance Rate: 746 of 2,811 submissions, 27%


        Cited By

• (2024) On the Negative Perception of Cross-domain Recommendations and Explanations. Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2102-2113. DOI: 10.1145/3626772.3657735. Online publication date: 10-Jul-2024.
• (2023) User Feedback-based Online Learning for Intent Classification. Proceedings of the 25th International Conference on Multimodal Interaction, 613-621. DOI: 10.1145/3577190.3614137. Online publication date: 9-Oct-2023.
• (2022) EntityBot: Actionable Entity Recommendations for Everyday Digital Task. Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, 1-4. DOI: 10.1145/3491101.3519910. Online publication date: 27-Apr-2022.
• (2021) EntityBot: Supporting Everyday Digital Tasks with Entity Recommendations. Proceedings of the 15th ACM Conference on Recommender Systems, 753-756. DOI: 10.1145/3460231.3478883. Online publication date: 13-Sep-2021.
• (2021) Exploratory Search of GANs with Contextual Bandits. Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 3157-3161. DOI: 10.1145/3459637.3482103. Online publication date: 26-Oct-2021.
• (2021) Entity Recommendation for Everyday Digital Tasks. ACM Transactions on Computer-Human Interaction, 28(5), 1-41. DOI: 10.1145/3458919. Online publication date: 20-Aug-2021.
• (2020) Introduction to Bandits in Recommender Systems. Proceedings of the 14th ACM Conference on Recommender Systems, 748-750. DOI: 10.1145/3383313.3411547. Online publication date: 22-Sep-2020.
• (2020) Human Strategic Steering Improves Performance of Interactive Optimization. Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization, 293-297. DOI: 10.1145/3340631.3394883. Online publication date: 7-Jul-2020.
• (2019) Bandit Algorithms in Recommender Systems. Proceedings of the 13th ACM Conference on Recommender Systems, 574-575. DOI: 10.1145/3298689.3346956. Online publication date: 10-Sep-2019.
• (2019) May AI? Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1-12. DOI: 10.1145/3290605.3300863. Online publication date: 2-May-2019.
