Eliminating Data Processing Bottlenecks in GNN Training over Large Graphs via Two-level Feature Compression
Information
Publisher
VLDB Endowment
Qualifiers
- Research-article
Article Metrics
- Total Citations: 0
- Total Downloads: 176
- Downloads (last 12 months): 176
- Downloads (last 6 weeks): 39