DOI: 10.5555/3524938.3524967

Customizing ML predictions for online algorithms

Published: 13 July 2020

Abstract

    A popular line of recent research incorporates ML advice in the design of online algorithms to improve their performance on typical instances. These papers treat the ML algorithm as a black box and redesign online algorithms to take advantage of ML predictions. In this paper, we ask the complementary question: can we redesign ML algorithms to provide better predictions for online algorithms? We explore this question in the context of the classic rent-or-buy problem, and show that incorporating optimization benchmarks in ML loss functions leads to significantly better performance, while maintaining a worst-case adversarial guarantee when the advice is completely wrong. We support this finding both through theoretical bounds and numerical simulations.
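    The rent-or-buy (ski-rental) setting the abstract refers to can be made concrete with a short sketch. The scheme below is one well-known way to combine a prediction with a worst-case fallback, in the style of the deterministic learning-augmented ski-rental algorithm of Purohit, Svitkina, and Kumar (NeurIPS 2018): the trust parameter `lam`, the function names, and the specific numbers are illustrative assumptions, not the construction of this paper.

    ```python
    import math

    def ski_rental_cost(buy_cost, true_days, buy_day):
        """Total cost of the strategy: rent until buy_day, then buy.
        If the season ends before buy_day, we only ever paid rent."""
        if true_days < buy_day:
            return true_days                 # rented every day, never bought
        return (buy_day - 1) + buy_cost      # rented buy_day - 1 days, then bought

    def predicted_buy_day(buy_cost, predicted_days, lam):
        """Pick the buy day from an ML prediction of the season length.
        lam in (0, 1]: smaller lam means more trust in the prediction."""
        if predicted_days >= buy_cost:
            return math.ceil(lam * buy_cost)     # prediction says: buy early
        return math.ceil(buy_cost / lam)         # prediction says: keep renting

    # Example instance: buying costs 10 days of rent; the prediction (30 days)
    # happens to be accurate.
    b, lam = 10, 0.5
    day = predicted_buy_day(b, predicted_days=30, lam=lam)   # -> 5
    alg = ski_rental_cost(b, true_days=30, buy_day=day)      # -> 4 + 10 = 14
    opt = min(30, b)   # offline optimum: rent all days or buy on day one
    print(alg / opt)   # competitive ratio on this instance: 1.4
    ```

    Trusting an accurate prediction beats the classic break-even rule (buy on day `buy_cost`, which is 2-competitive), while the `lam`-dependent buy day caps the damage when the prediction is wrong. This paper's point is complementary: the loss used to *train* the predictor should reflect how errors propagate through such a downstream rule, not just raw prediction accuracy.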


    Cited By

    • (2023) Learning-augmented private algorithms for multiple quantile release. Proceedings of the 40th International Conference on Machine Learning, pp. 16344-16376. 10.5555/3618408.3619078. Online publication date: 23-Jul-2023.
    • (2022) Learning predictions for algorithms with predictions. Proceedings of the 36th International Conference on Neural Information Processing Systems, pp. 3542-3555. 10.5555/3600270.3600526. Online publication date: 28-Nov-2022.


    Published In

    ICML'20: Proceedings of the 37th International Conference on Machine Learning (July 2020, 11702 pages). Publisher: JMLR.org.
