
Graph Representation Learning

  • Book
  • © 2020


About this book

Graph-structured data is ubiquitous throughout the natural and social sciences, from telecommunication networks to quantum chemistry. Building relational inductive biases into deep learning architectures is crucial for creating systems that can learn, reason, and generalize from this kind of data. Recent years have seen a surge in research on graph representation learning, including techniques for deep graph embeddings, generalizations of convolutional neural networks to graph-structured data, and neural message-passing approaches inspired by belief propagation. These advances in graph representation learning have led to new state-of-the-art results in numerous domains, including chemical synthesis, 3D vision, recommender systems, question answering, and social network analysis.

This book provides a synthesis and overview of graph representation learning. It begins with a discussion of the goals of graph representation learning as well as key methodological foundations in graph theory and network analysis. Following this, the book introduces and reviews methods for learning node embeddings, including random-walk-based methods and applications to knowledge graphs. It then provides a technical synthesis and introduction to the highly successful graph neural network (GNN) formalism, which has become a dominant and fast-growing paradigm for deep learning with graph data. The book concludes with a synthesis of recent advancements in deep generative models for graphs—a nascent but quickly growing subset of graph representation learning.
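To make the neural message-passing idea mentioned above concrete, here is a minimal, illustrative sketch of a single GNN layer in Python with NumPy. It is not code from the book: the toy adjacency list, the random weight matrices, and the mean-aggregate-then-ReLU update rule are assumptions chosen only to mirror the generic message-passing layer the book surveys.

# One round of neural message passing on a small graph (illustrative sketch).
import numpy as np

# Toy graph: 4 nodes, undirected edges given as an adjacency list (assumed data).
adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

rng = np.random.default_rng(0)
num_nodes, in_dim, out_dim = 4, 8, 8

# Initial node features and layer weights, randomly initialised for the sketch.
H = rng.normal(size=(num_nodes, in_dim))
W_self = rng.normal(size=(in_dim, out_dim)) * 0.1
W_neigh = rng.normal(size=(in_dim, out_dim)) * 0.1

def message_passing_step(H, adjacency, W_self, W_neigh):
    """One GNN layer: each node aggregates its neighbours' features (mean),
    combines them with its own representation, and applies a nonlinearity."""
    H_new = np.zeros((H.shape[0], W_self.shape[1]))
    for v, neighbours in adjacency.items():
        msg = H[neighbours].mean(axis=0)           # aggregate incoming messages
        H_new[v] = H[v] @ W_self + msg @ W_neigh   # update with self + neighbour info
    return np.maximum(H_new, 0.0)                  # ReLU

H1 = message_passing_step(H, adjacency, W_self, W_neigh)
print(H1.shape)  # (4, 8): an updated embedding for every node

Stacking several such layers lets information propagate over multi-hop neighbourhoods, which is the core intuition behind the GNN formalism the book develops in detail.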


Table of contents (10 chapters)

  1. Node Embeddings

  2. Graph Neural Networks

  3. Generative Graph Models

Authors and Affiliations

  • William L. Hamilton

    McGill University, Canada

    Mila-Quebec Artificial Intelligence Institute, Canada

About the author

William L. Hamilton is an Assistant Professor of Computer Science at McGill University and a Canada CIFAR AI Chair. His research focuses on graph representation learning as well as applications in computational social science and biology. In recent years, he has published more than 20 papers on graph representation learning at top-tier venues across machine learning and network science, and has co-organized several large workshops and tutorials on the topic. His work has been recognized by several awards, including the 2018 Arthur L. Samuel Thesis Award for the best doctoral thesis in the Computer Science department at Stanford University and the 2017 Cozzarelli Best Paper Award from the Proceedings of the National Academy of Sciences.

