DOI: 10.1145/3663408.3665807

Programming Transport Layer with Galvatron

Published: 03 August 2024
    Abstract

    This paper introduces Galvatron, a domain-specific language designed to simplify transport-layer programming. Galvatron abstracts away the underlying data structures and operations and exposes parametric processing stages to the developer, significantly reducing coding effort. Our evaluation demonstrates Galvatron's ability to encapsulate the complex semantics of four real-world transport protocols using merely 2% of the code of their native implementations.
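    The page does not include any of Galvatron's concrete syntax, so the following is a minimal sketch, in plain Python, of the general idea the abstract describes: composing a transport send path from parametric processing stages. Every name here (Packet, Stage, segmenter, window, pipeline) and every parameter (mss, cwnd) is hypothetical and invented for illustration; none of it is taken from Galvatron itself.

```python
# Hypothetical illustration only: Galvatron's real syntax is not shown on
# this page. This sketch models a transport send path as a chain of
# parametric stages, so the developer supplies parameters (mss, cwnd) and
# stage order rather than data structures and bookkeeping code.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Packet:
    seq: int        # byte offset of this packet in the stream
    payload: bytes  # application bytes carried by this packet

# A stage is a parametric transform over a batch of outgoing packets.
Stage = Callable[[List[Packet]], List[Packet]]

def segmenter(mss: int) -> Callable[[bytes], List[Packet]]:
    """Cut an application buffer into packets of at most mss bytes."""
    def run(data: bytes) -> List[Packet]:
        return [Packet(seq=i, payload=data[i:i + mss])
                for i in range(0, len(data), mss)]
    return run

def window(cwnd: int) -> Stage:
    """Admit at most cwnd packets per round (a toy send window)."""
    def run(batch: List[Packet]) -> List[Packet]:
        return batch[:cwnd]
    return run

def pipeline(*stages: Stage) -> Stage:
    """Compose stages left to right, as a DSL compiler might."""
    def run(batch: List[Packet]) -> List[Packet]:
        for stage in stages:
            batch = stage(batch)
        return batch
    return run

if __name__ == "__main__":
    pkts = segmenter(mss=3)(b"hello transport world")
    send_path = pipeline(window(cwnd=4))
    print([p.payload for p in send_path(pkts)])
```

    In this framing the developer writes only stage parameters and their ordering; a hidden runtime would own the buffers, timers, and retransmission state, which matches the abstract's claim that Galvatron conceals the underlying data structures and operations.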

    Published In

    APNet '24: Proceedings of the 8th Asia-Pacific Workshop on Networking
    August 2024
    230 pages
    ISBN: 9798400717581
    DOI: 10.1145/3663408
    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

    Association for Computing Machinery, New York, NY, United States

    Qualifiers

    • Poster
    • Research
    • Refereed limited

    Conference

    APNet 2024

    Acceptance Rates

    APNet '24 Paper Acceptance Rate: 50 of 118 submissions, 42%
    Overall Acceptance Rate: 50 of 118 submissions, 42%
