DOI: 10.1145/3404835.3462785

FedNLP: An Interpretable NLP System to Decode Federal Reserve Communications

Published: 11 July 2021

  • Abstract

    The Federal Reserve System (the Fed) plays a significant role in shaping monetary policy and financial conditions worldwide. Although it is important to analyse the Fed's communications to extract useful information, they are generally long-form and complex due to the ambiguous and esoteric nature of their content. In this paper, we present FedNLP, an interpretable multi-component Natural Language Processing (NLP) system to decode Federal Reserve communications. The system is designed for end-users to explore how NLP techniques can assist their holistic understanding of the Fed's communications with no coding. Behind the scenes, FedNLP applies multiple NLP models, from traditional machine learning algorithms to deep neural network architectures, to each downstream task. The demonstration shows multiple results at once, including sentiment analysis, a summary of the document, a prediction of the Federal Funds Rate movement, and a visualization for interpreting the prediction model's result. Our application system and demonstration are available at https://fednlp.net.
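    The multi-component idea in the abstract (sentiment analysis, summarization, and rate-movement prediction over one document) can be illustrated with a minimal sketch. This is a hypothetical toy pipeline, not the paper's actual models: the lexicons, the `sentiment_score`/`summarize`/`predict_rate_move` helpers, and the hawkish/dovish word lists are all invented for illustration, whereas FedNLP itself uses trained ML and deep neural models for each task.

    ```python
    # Hypothetical toy pipeline sketching FedNLP's multi-component design.
    # The lexicons and rules below are illustrative only, not the paper's models.

    HAWKISH = {"inflation", "tightening", "raise", "strong", "robust"}
    DOVISH = {"accommodative", "lower", "weak", "uncertainty", "downside"}

    def tokenize(text):
        """Lowercase whitespace tokens with surrounding punctuation stripped."""
        return [w.strip(".,;:").lower() for w in text.split()]

    def sentiment_score(text):
        """Net hawkish-minus-dovish term count, normalized by token count."""
        toks = tokenize(text)
        if not toks:
            return 0.0
        hawk = sum(t in HAWKISH for t in toks)
        dove = sum(t in DOVISH for t in toks)
        return (hawk - dove) / len(toks)

    def summarize(text, n=1):
        """Toy extractive summary: keep the n sentences with most lexicon hits."""
        sents = [s.strip() for s in text.split(".") if s.strip()]
        scored = sorted(
            sents,
            key=lambda s: -sum(t in HAWKISH | DOVISH for t in tokenize(s)),
        )
        return scored[:n]

    def predict_rate_move(score, threshold=0.0):
        """Toy rule: positive net hawkishness -> 'raise', else 'lower/hold'."""
        return "raise" if score > threshold else "lower/hold"

    statement = ("Inflation remains strong and robust. "
                 "The Committee will consider tightening further.")
    score = sentiment_score(statement)
    print(predict_rate_move(score))  # prints "raise"
    ```

    The point of the sketch is the architecture, not the heuristics: each downstream task consumes the same document and returns its own result, so a front-end can display all of them at once, as the demo does.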

    Supplementary Material

    MP4 File (SIGIR21_de1893_FedNLP_caption.mp4)
    SIGIR 2021 demo paper presentation by Jean Lee, titled "FedNLP: An Interpretable NLP System to Decode Federal Reserve Communications". The video presents the background, the system design and implementation, and a demonstration walkthrough with a few examples. The demo presents FedNLP, a system designed for end-users to explore how NLP techniques can assist their holistic understanding of the Fed's communications with no coding. Our application system and demonstration are available at https://fednlp.net.


    Cited By

    • (2023) FLUEnT: Financial Language Understandability Enhancement Toolkit. In Proceedings of the 6th Joint International Conference on Data Science & Management of Data (10th ACM IKDD CODS and 28th COMAD), 258–262. DOI: 10.1145/3570991.3571067. Online publication date: 4 Jan 2023.
    • (2023) Applications of Explainable Artificial Intelligence in Finance—a systematic review of Finance, Information Systems, and Computer Science literature. Management Review Quarterly 74, 2, 867–907. DOI: 10.1007/s11301-023-00320-0. Online publication date: 28 Feb 2023.



    Published In

    SIGIR '21: Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval
    July 2021, 2998 pages
    ISBN: 9781450380379
    DOI: 10.1145/3404835

    Publisher

    Association for Computing Machinery, New York, NY, United States



    Author Tags

    1. AI application
    2. federal funds rate forecasting
    3. federal reserve
    4. interpretable machine learning
    5. text analysis

    Qualifiers

    • Short-paper

    Conference

    SIGIR '21

    Acceptance Rates

    Overall Acceptance Rate 792 of 3,983 submissions, 20%

