
DynGEM: Deep Embedding Method for Dynamic Graphs

Palash Goyal∗, Nitin Kamra∗, Xinran He, Yan Liu


University of Southern California
{palashgo, nkamra, xinranhe, yanliu.cs}@usc.edu

∗ Nitin Kamra and Palash Goyal contributed equally to this article.

Abstract

Embedding large graphs in low dimensional spaces has recently attracted significant interest due to its wide applications such as graph visualization, link prediction and node classification. Existing methods focus on computing the embedding for static graphs. However, many graphs in practical applications are dynamic and evolve constantly over time. Naively applying existing embedding algorithms to each snapshot of dynamic graphs independently usually leads to unsatisfactory performance in terms of stability, flexibility and efficiency. In this work, we present an efficient algorithm, DynGEM, based on recent advances in deep autoencoders for graph embeddings, to address this problem. The major advantages of DynGEM include: (1) the embedding is stable over time, (2) it can handle growing dynamic graphs, and (3) it has better running time than using static embedding methods on each snapshot of a dynamic graph. We test DynGEM on a variety of tasks including graph visualization, graph reconstruction, link prediction and anomaly detection (on both synthetic and real datasets). Experimental results demonstrate the superior stability and scalability of our approach.

1 Introduction

Many important tasks in network analysis involve making predictions over nodes and/or edges in a graph, which demands effective algorithms for extracting meaningful patterns and constructing predictive features. Among the many attempts towards this goal, graph embedding, i.e., learning a low-dimensional representation for each node in the graph that accurately captures its relationship to other nodes, has recently attracted much attention. It has been demonstrated that graph embedding is superior to alternatives in many supervised learning tasks, such as node classification, link prediction and graph reconstruction [Ahmed et al., 2013; Perozzi et al., 2014; Cao et al., 2015; Tang et al., 2015; Grover and Leskovec, 2016; Ou et al., 2016].

Various approaches have been proposed for static graph embedding [Goyal and Ferrara, 2017]. Examples include SVD based models [Belkin and Niyogi, 2001; Roweis and Saul, 2000; Tenenbaum et al., 2000; Cao et al., 2015; Ou et al., 2016], which decompose the Laplacian or a high-order adjacency matrix to produce node embeddings. Others include random-walk based models [Grover and Leskovec, 2016; Perozzi et al., 2014], which create embeddings from localized random walks, and many others [Tang et al., 2015; Ahmed et al., 2013; Cao et al., 2016; Niepert et al., 2016]. Recently, Wang et al. designed an innovative model, SDNE, which utilizes a deep autoencoder to handle non-linearity and generate more accurate embeddings [Wang et al., 2016]. Many other methods which handle attributed graphs and generate a unified embedding have also been proposed in the recent past [Chang et al., 2015; Huang et al., 2017a; 2017b].

However, in practical applications, many graphs, such as social networks, are dynamic and evolve over time. For example, new links are formed (when people make new friends) and old links can disappear. Moreover, new nodes can be introduced into the graph (e.g., users can join the social network) and create new links to existing nodes. Usually, we represent dynamic graphs as a collection of snapshots of the graph at different time steps [Leskovec et al., 2007].

Existing works which focus on dynamic embeddings often apply static embedding algorithms to each snapshot of the dynamic graph and then rotationally align the resulting static embeddings across time steps [Hamilton et al., 2016; Kulkarni et al., 2015]. Naively applying existing static embedding algorithms independently to each snapshot leads to unsatisfactory performance due to the following challenges:

• Stability: The embedding generated by static methods is not stable, i.e., the embeddings of graphs at consecutive time steps can differ substantially even though the graphs do not change much.

• Growing Graphs: New nodes can be introduced into the graph and create new links to existing nodes as the dynamic graph grows in time. All existing approaches assume a fixed number of nodes in learning graph embeddings and thus cannot handle growing graphs.

• Scalability: Learning embeddings independently for each snapshot leads to a running time linear in the number of snapshots. As learning a single embedding is already computationally expensive, the naive approach does not scale to dynamic networks with many snapshots.
Other approaches have attempted to learn embeddings of dynamic graphs by explicitly imposing a temporal regularizer to ensure temporal smoothness over embeddings of consecutive snapshots [Zhu et al., 2016]. This approach fails for dynamic graphs where consecutive time steps can differ significantly, and hence cannot be used for applications like anomaly detection. Moreover, their approach is a Graph Factorization (abbreviated as GF hereafter) [Ahmed et al., 2013] based model, and DynGEM outperforms these models as shown by our experiments in section 5. [Dai et al., 2017] learn embeddings of dynamic graphs, although they focus on bipartite graphs, specifically for user-item interactions.

In this paper, we develop an efficient graph embedding algorithm, referred to as DynGEM, to generate stable embeddings of dynamic graphs. DynGEM employs a deep autoencoder at its core and leverages recent advances in deep learning to generate highly non-linear embeddings. Instead of learning the embedding of each snapshot from scratch, DynGEM incrementally builds the embedding of the snapshot at time t from the embedding of the snapshot at time t − 1. Specifically, we initialize the embedding from the previous time step, and then carry out gradient training. This approach not only ensures stability of embeddings across time, but also leads to efficient training, as all embeddings after the first time step require very few iterations to converge. To handle dynamic graphs with a growing number of nodes, we incrementally grow the size of our neural network with our heuristic, PropSize, to dynamically determine the number of hidden units required for each snapshot. In addition to the proposed model, we also introduce rigorous stability metrics for dynamic graph embeddings.

On both synthetic and real-world datasets, experimental results demonstrate that our approach achieves similar or better accuracy in graph reconstruction and link prediction more efficiently than existing static approaches. DynGEM is also applicable to dynamic graph visualization and anomaly detection, which are not feasible for many previous static embedding approaches.

2 Definitions and Preliminaries

We denote a weighted graph as G(V, E), where V is the vertex set and E is the edge set. The weighted adjacency matrix of G is denoted by S. If (u, v) ∈ E, we have suv > 0 denoting the weight of edge (u, v); otherwise we have suv = 0. We use si = [si,1, · · · , si,|V|] to denote the i-th row of the adjacency matrix.

Given a graph G = (V, E), a graph embedding is a mapping f : V → Rd, namely yv = f(v) ∀v ∈ V. We require that d ≪ |V| and that the function f preserves some proximity measure defined on the graph G. Intuitively, if two nodes u and v are "similar" in graph G, their embeddings yu and yv should be close to each other in the embedding space. We use the notation f(G) ∈ R|V|×d for the embedding matrix of all nodes in the graph G.

In this paper, we consider the problem of dynamic graph embedding. We represent a dynamic graph G as a series of snapshots, i.e. G = {G1, · · · , GT}, where Gt = (Vt, Et) and T is the number of snapshots. We consider the setting of growing graphs, i.e. Vt ⊆ Vt+1, namely new nodes can join the dynamic graph and create links to existing nodes. We treat deleted nodes as part of the graph with zero weights to the rest of the nodes. We assume no relationship between Et and Et+1: new edges can form between snapshots and existing edges can disappear.

A dynamic graph embedding extends the concept of embedding to dynamic graphs. Given a dynamic graph G = {G1, · · · , GT}, a dynamic graph embedding is a time-series of mappings F = {f1, · · · , fT} such that the mapping ft is a graph embedding for Gt and all mappings preserve the proximity measure for their respective graphs.

A successful dynamic graph embedding algorithm should create stable embeddings over time. Intuitively, a stable dynamic embedding is one in which consecutive embeddings differ only by small amounts if the underlying graphs change only a little, i.e. if Gt+1 does not differ much from Gt, the embedding outputs Yt+1 = ft+1(Gt+1) and Yt = ft(Gt) also change only by a small amount.

To be more specific, let St(Ṽ) be the weighted adjacency matrix of the induced subgraph of node set Ṽ ⊆ Vt, and let Ft(Ṽ) ∈ R|Ṽ|×d be the embedding of all nodes in Ṽ ⊆ Vt at snapshot t. We define the absolute stability as

Sabs(F; t) = ‖Ft+1(Vt) − Ft(Vt)‖F / ‖St+1(Vt) − St(Vt)‖F.

In other words, the absolute stability of any embedding F is the ratio of the difference between embeddings to the difference between adjacency matrices. Since this definition of stability depends on the sizes of the matrices involved, we define another measure called relative stability, which is invariant to the size of the adjacency and embedding matrices:

Srel(F; t) = (‖Ft+1(Vt) − Ft(Vt)‖F / ‖Ft(Vt)‖F) / (‖St+1(Vt) − St(Vt)‖F / ‖St(Vt)‖F).

We further define the stability constant:

KS(F) = max over τ, τ′ of |Srel(F; τ) − Srel(F; τ′)|.

We say that a dynamic embedding F is stable as long as it has a small stability constant. Clearly, the smaller KS(F) is, the more stable the embedding F is. In the experiments, we use the stability constant as the metric to compare the stability of our DynGEM algorithm to other baselines.
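Since the metrics above are just Frobenius-norm ratios, they are straightforward to compute. The following NumPy sketch is ours (not from the paper) and assumes dense matrices whose first |Vt| rows/columns correspond to the common node set Vt, which is consistent with the growing-graph setting Vt ⊆ Vt+1.

import numpy as np

def absolute_stability(F_next, F_cur, S_next, S_cur):
    # S_abs(F; t): ratio of embedding drift to adjacency drift,
    # both restricted to the nodes V_t present at time t.
    return np.linalg.norm(F_next - F_cur) / np.linalg.norm(S_next - S_cur)

def relative_stability(F_next, F_cur, S_next, S_cur):
    # S_rel(F; t): the same ratio, but each drift is normalized by the
    # norm of the corresponding matrix at time t, making it size-invariant.
    emb_drift = np.linalg.norm(F_next - F_cur) / np.linalg.norm(F_cur)
    adj_drift = np.linalg.norm(S_next - S_cur) / np.linalg.norm(S_cur)
    return emb_drift / adj_drift

def stability_constant(embeddings, adjacencies):
    # K_S(F): largest gap between relative stabilities over all pairs of
    # time steps. embeddings[t] is F_t, adjacencies[t] is S_t; the first
    # |V_t| rows of the (t+1)-th matrices are assumed to be the nodes of V_t.
    n = [A.shape[0] for A in adjacencies]
    s_rel = [
        relative_stability(embeddings[t + 1][: n[t], :], embeddings[t],
                           adjacencies[t + 1][: n[t], : n[t]], adjacencies[t])
        for t in range(len(adjacencies) - 1)
    ]
    return max(s_rel) - min(s_rel)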
3 DynGEM: Dynamic Graph Embedding Model

Recent advances in deep unsupervised learning have shown that autoencoders can successfully learn very complex low-dimensional representations of data for various tasks [Bengio et al., 2013]. DynGEM uses a deep autoencoder to map the input data to a highly nonlinear latent space to capture the connectivity trends in a graph snapshot at any time step. The model is semi-supervised and minimizes a combination of two objective functions corresponding to the first-order proximity and the second-order proximity respectively. The autoencoder model is shown in Figure 1, and the terminology used is given in Table 1 (as in [Wang et al., 2016]). The symbols with a hat on top are for the decoder.
Symbol                              Definition
n                                   number of vertices
K                                   number of layers
S = {s1, · · · , sn}                adjacency matrix of G
X = {xi}, i ∈ [n]                   input data
X̂ = {x̂i}, i ∈ [n]                   reconstructed data
Y(k) = {yi(k)}, i ∈ [n]             hidden layers
Y = Y(K)                            embedding
θ = {W(k), Ŵ(k), b(k), b̂(k)}        weights, biases

Table 1: Notations for the deep autoencoder

Figure 1: DynGEM: Dynamic Graph Embedding Model. The figure shows two snapshots of a dynamic graph and the corresponding deep autoencoder at each step.

The neighborhood of a node vi is given by si ∈ Rn. For any pair of nodes vi and vj from graph Gt, the model takes their neighborhoods as input: xi = si and xj = sj, and passes them through the autoencoder to generate d-dimensional embeddings yi = yi(K) and yj = yj(K) at the outputs of the encoder. The decoder reconstructs the neighborhoods x̂i and x̂j from the embeddings yi and yj.
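For concreteness, the following is a minimal sketch (ours, with hypothetical class name and layer sizes) of the kind of deep autoencoder described above, written in PyTorch: the encoder maps a node's neighborhood vector si ∈ Rn to a d-dimensional embedding, and the decoder reconstructs the neighborhood from it. ReLU activations are used throughout, as discussed in section 3.4.

import torch
import torch.nn as nn

class GraphAutoencoder(nn.Module):
    # Maps a neighborhood vector s_i (one row of the adjacency matrix)
    # to a d-dimensional embedding y_i and back to a reconstruction x_hat_i.
    def __init__(self, n_nodes, hidden_sizes=(500, 300), d=100):
        super().__init__()
        enc_sizes = [n_nodes, *hidden_sizes, d]
        dec_sizes = enc_sizes[::-1]
        self.encoder = nn.Sequential(*[
            layer for i in range(len(enc_sizes) - 1)
            for layer in (nn.Linear(enc_sizes[i], enc_sizes[i + 1]), nn.ReLU())
        ])
        self.decoder = nn.Sequential(*[
            layer for i in range(len(dec_sizes) - 1)
            for layer in (nn.Linear(dec_sizes[i], dec_sizes[i + 1]), nn.ReLU())
        ])

    def forward(self, x):
        y = self.encoder(x)       # embedding y_i = y_i^(K)
        x_hat = self.decoder(y)   # reconstructed neighborhood x_hat_i
        return y, x_hat

The hidden widths (500, 300) and d = 100 mirror the initial architecture reported for the larger datasets in section 4.2; they are only starting points, since PropSize may later widen or deepen the network.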
3.1 Handling growing graphs

Handling dynamic graphs of growing sizes requires a good mechanism to expand the autoencoder model, while preserving weights from previous time steps of training. A key component is to decide how the number of hidden layers and the number of hidden units should grow as more nodes are added to the graph. We propose a heuristic, PropSize, to compute new layer sizes for all layers, which ensures that the sizes of consecutive layers are within a certain factor of each other.

PropSize: We propose this heuristic to compute the new sizes of neural network layers at each time step and insert new layers if needed. For the encoder, layer widths are computed for each pair of consecutive layers (lk and lk+1), starting from the input layer (l1 = x) and first hidden layer (l2 = y(1)), until the following condition is satisfied for each consecutive layer pair:

size(lk+1) ≥ ρ × size(lk),

where 0 < ρ < 1 is a suitably chosen hyperparameter. If the condition is not satisfied for any pair (lk, lk+1), the layer width for lk+1 is increased to satisfy the heuristic. Note that the size of the embedding layer y = y(K) is always kept fixed at d and never expanded. If the PropSize rule is not satisfied at the penultimate and the embedding layer of the encoder, we add more layers in between (with sizes satisfying the rule) till the rule is satisfied. This procedure is also applied to the decoder layers, starting from the output layer (x̂) and continuing inwards towards the embedding layer (ŷ = ŷ(K)), to compute new layer sizes.

After deciding the number of layers and the number of hidden units in each layer, we adopt the Net2WiderNet and Net2DeeperNet approaches from [Chen et al., 2015] to expand the deep autoencoder. Net2WiderNet allows us to widen layers, i.e. add more hidden units to an existing neural network layer, while approximately preserving the function being computed by that layer. Net2DeeperNet inserts a new layer between two existing layers by making the new intermediate layer closely replicate the identity mapping. This can be done for ReLU activations but not for sigmoid activations.

The combination of widening and deepening the autoencoder with PropSize, Net2WiderNet and Net2DeeperNet at each time step allows us to work with dynamic graphs with a growing number of nodes over time, and gives remarkable performance as shown by our experiments.
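As an illustration (our own sketch, not code from the paper), the PropSize rule for the encoder can be written as a small routine that walks the current layer widths, widens any layer that falls below ρ times its predecessor, and inserts extra layers before the fixed-size embedding layer when the rule still cannot be met there:

import math

def propsize(layer_sizes, rho=0.3):
    # layer_sizes: encoder widths [input, hidden..., d]; the embedding
    # width d (last entry) is fixed and never expanded.
    sizes = list(layer_sizes)
    d = sizes[-1]
    # Widen intermediate layers until size(l_{k+1}) >= rho * size(l_k).
    for k in range(1, len(sizes) - 1):
        sizes[k] = max(sizes[k], math.ceil(rho * sizes[k - 1]))
    # If the rule still fails between the penultimate layer and the
    # embedding layer, insert new layers (each rho times the width of
    # the layer before it) until the embedding layer satisfies it.
    while d < rho * sizes[-2]:
        sizes.insert(-1, math.ceil(rho * sizes[-2]))
    return sizes

For example, with ρ = 0.3 as in section 4.2, propsize([1000, 500, 300, 100]) leaves the widths unchanged, while a hypothetical graph grown to 5,000 nodes, propsize([5000, 500, 300, 100]), returns [5000, 1500, 450, 135, 100], i.e. it widens both hidden layers and inserts a new 135-unit layer before the 100-dimensional embedding. The same routine would be applied symmetrically to the decoder widths, starting from the output layer.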
3.2 Loss function and training

To learn the model parameters, a weighted combination of three objectives is minimized at each time step:

Lnet = Lglob + αLloc + ν1L1 + ν2L2,

where α, ν1 and ν2 are hyperparameters appropriately chosen as relative weights of the objective functions. Lloc = Σi,j sij ‖yi − yj‖² is the first-order proximity, which corresponds to the local structure of the graph. Lglob = Σi ‖(x̂i − xi) ⊙ bi‖² = ‖(X̂ − X) ⊙ B‖F² is the second-order proximity, which corresponds to the global neighborhood of each node in the graph and is preserved by an unsupervised reconstruction of the neighborhood of each node. Here bi is a vector with bij = 1 if sij = 0 and bij = β > 1 otherwise. This penalizes inaccurate reconstruction of an observed edge eij more than that of unobserved edges. The regularizers L1 = Σk (‖W(k)‖1 + ‖Ŵ(k)‖1) and L2 = Σk (‖W(k)‖F² + ‖Ŵ(k)‖F²), where the sums run over k = 1, · · · , K and ‖W‖1 denotes the sum of absolute values of the entries of W, are added to encourage sparsity in the network weights and to prevent the model from overfitting the graph structure, respectively. DynGEM learns the parameters θt of this deep autoencoder at each time step t, and uses Yt(K) as the embedding output for graph Gt.
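The sketch below (ours; the function and parameter names are hypothetical, and it assumes the GraphAutoencoder sketch given earlier) assembles Lnet for a minibatch of node pairs. The default hyperparameters are taken from within the grid ranges reported in section 4.2.

import torch

def dyngem_loss(model, x_i, x_j, s_ij, beta=5.0, alpha=1e-5, nu1=1e-4, nu2=1e-3):
    # x_i, x_j: (batch, n) neighborhood vectors; s_ij: (batch,) edge weights.
    y_i, xhat_i = model(x_i)
    y_j, xhat_j = model(x_j)

    # Second-order proximity: reconstruction error, with observed entries
    # (s > 0) up-weighted by beta via the penalty vectors b_i, b_j.
    b_i = torch.where(x_i > 0, torch.full_like(x_i, beta), torch.ones_like(x_i))
    b_j = torch.where(x_j > 0, torch.full_like(x_j, beta), torch.ones_like(x_j))
    l_glob = (((xhat_i - x_i) * b_i) ** 2).sum() + (((xhat_j - x_j) * b_j) ** 2).sum()

    # First-order proximity: embeddings of connected nodes stay close.
    l_loc = (s_ij * ((y_i - y_j) ** 2).sum(dim=1)).sum()

    # L1 / L2 regularizers over the encoder and decoder weight matrices.
    l1 = sum(p.abs().sum() for n, p in model.named_parameters() if "weight" in n)
    l2 = sum((p ** 2).sum() for n, p in model.named_parameters() if "weight" in n)

    return l_glob + alpha * l_loc + nu1 * l1 + nu2 * l2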
3.3 Stability by reusing previous step embedding

For a dynamic graph G = {G1, · · · , GT}, we train the deep autoencoder model fully on G1 using random initialization of the parameters θ1. For all subsequent time steps, we initialize the model parameters θt with the parameters θt−1 from the previous time step, before widening/deepening the model. This results in a direct transfer of structural knowledge from ft−1 to ft, so the model only needs to learn about the changes between Gt−1 and Gt. Hence the training converges in very few iterations for time steps {2, · · · , T}. More importantly, it guarantees stability by ensuring that the embedding Yt stays close to Yt−1. Note that, unlike [Zhu et al., 2016], we do not impose explicit regularizers to keep the embeddings at time steps t − 1 and t close: if the graph snapshots at times t − 1 and t differ significantly, then so should the corresponding embeddings. Our results in section 5 show the superior stability and faster runtime of our method over other baselines.
Algorithm 1: DynGEM
1: Input: graphs Gt = (Vt, Et), Gt−1 = (Vt−1, Et−1); embedding Yt−1; autoencoder weights θt−1
2: Output: embedding Yt
3: From Gt, generate the adjacency matrix S and penalty matrix B
4: Create the autoencoder model with the initial architecture
5: Initialize θt randomly if t = 1, else θt = θt−1
6: if |Vt| > |Vt−1| then
7:     Compute new layer sizes with the PropSize heuristic
8:     Expand autoencoder layers and/or insert new layers
9: end if
10: Create the set S = {(si, sj)} for each e = (vi, vj) ∈ Et
11: for i = 1, 2, . . . do
12:     Sample a minibatch M from S
13:     Compute gradients ∇θt Lnet of the objective Lnet on M
14:     Do a gradient update on θt with Nesterov momentum
15: end for
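To connect Algorithm 1 to the pieces above, here is a rough single-snapshot training loop (our sketch, assuming the GraphAutoencoder and dyngem_loss sketches given earlier; the epoch count, learning rate and batch size are hypothetical, and the PropSize/Net2WiderNet expansion of steps 6-9 is omitted for brevity).

import torch

def train_snapshot(model, adjacency, prev_state=None, epochs=50, lr=1e-3):
    # adjacency: dense (|V_t| x |V_t|) weighted adjacency matrix of G_t (float tensor).
    # prev_state: state_dict of the model trained on G_{t-1}, if any.
    if prev_state is not None:
        model.load_state_dict(prev_state)          # warm start: theta_t = theta_{t-1}
    edges = adjacency.nonzero(as_tuple=False)      # training pairs (v_i, v_j) in E_t
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.99, nesterov=True)
    for _ in range(epochs):
        perm = torch.randperm(edges.shape[0])
        for batch in perm.split(256):              # minibatches M sampled from S
            i, j = edges[batch, 0], edges[batch, 1]
            loss = dyngem_loss(model, adjacency[i], adjacency[j], adjacency[i, j])
            opt.zero_grad()
            loss.backward()
            opt.step()
    with torch.no_grad():
        Y_t, _ = model(adjacency)                  # embedding Y_t for all nodes of G_t
    return Y_t, model.state_dict()

The momentum coefficient of 0.99 matches the value reported in section 4.2; calling this routine once per snapshot, feeding each call the previous state_dict, mirrors how Algorithm 1 is applied repeatedly to generate the dynamic embedding.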
3.4 Techniques for scalability

Previous deep autoencoder models [Wang et al., 2016] for static graph embeddings use the sigmoid activation function and are trained with stochastic gradient descent (SGD).

We use ReLU in all autoencoder layers to support weighted graphs, since ReLU can construct arbitrary positive entries of si. It also accelerates training, since the derivative of ReLU is straightforward to compute, whereas the derivative of sigmoid requires computing exponentials [Glorot et al., 2011]. Lastly, ReLU allows gradients from both objectives Lloc and Lglob to propagate effectively through the encoder and averts the vanishing gradient effect [Glorot et al., 2011] which we observe for sigmoid.

We also use Nesterov momentum [Sutskever et al., 2013] with properly tuned hyperparameters, which converges much faster than pure SGD. Lastly, we observed better performance on all tasks with a combination of L1-norm and L2-norm regularizers.

The pseudocode for learning the DynGEM model on a single snapshot of the dynamic graph is shown in Algorithm 1. It can be called repeatedly on each snapshot of the dynamic graph to generate the dynamic embedding.

4 Experiments

4.1 Datasets

We evaluate the performance of DynGEM on both synthetic and real-world dynamic graphs. The datasets are summarized in Table 2.

Dataset    |V|             |E|               T
SYN        1,000           79,800-79,910     40
HEP-TH     1,424-7,980     2,556-21,036      60
AS         7,716           10,695-26,467     100
ENRON      184             63-591            128

Table 2: Statistics of datasets. |V|, |E| and T denote the number of nodes, the number of edges and the length of the time series respectively.

Synthetic Data (SYN): We generate synthetic dynamic graphs using the Stochastic Block Model [Wang and Wong, 1987]. The first snapshot of the dynamic graph is generated to have three equal-sized communities with in-block probability 0.2 and cross-block probability 0.01. To generate subsequent graphs, we randomly pick nodes at each time step and move them to another community. We use SYN to visualize the changes in embeddings as nodes change communities.

HEP-TH [Gehrke et al., 2003]: The original dataset contains abstracts of papers in the High Energy Physics Theory conference in the period from January 1993 to April 2003. For each month, we create a collaboration network using all papers published up to that month. We take the first five years of data and generate a time series containing 60 graphs, with the number of nodes increasing from 1,424 to 7,980.

Autonomous Systems (AS) [Leskovec and Krevl, 2014]: This is a communication network of who-talks-to-whom from the BGP (Border Gateway Protocol) logs. The dataset contains 733 instances spanning from November 8, 1997 to January 2, 2000. For our evaluation, we consider a subset of this dataset which contains the first 100 snapshots.

ENRON [Klimt and Yang, 2004]: The dataset contains emails between employees of Enron Inc. from January 1999 to July 2002. We process the graph as done in [Park et al., 2009] by considering email communication only between top executives for each week, starting from January 1999.
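The SYN construction above can be reproduced with standard tooling; the snippet below is our own illustration (not the authors' generation script), and the function name and the number of nodes moved per step are hypothetical. It samples a stochastic block model with three (nearly) equal communities over 1,000 nodes, in-block probability 0.2 and cross-block probability 0.01, and reassigns a few nodes to a different community before each new snapshot.

import numpy as np

def syn_snapshots(n=1000, T=40, p_in=0.2, p_out=0.01, moves_per_step=10, seed=0):
    # communities[v] in {0, 1, 2}; starts with three (nearly) equal blocks.
    rng = np.random.default_rng(seed)
    communities = np.arange(n) % 3
    snapshots = []
    for _ in range(T):
        same = communities[:, None] == communities[None, :]
        probs = np.where(same, p_in, p_out)
        upper = np.triu(rng.random((n, n)) < probs, k=1)   # sample each pair once
        adj = (upper | upper.T).astype(float)              # symmetric, no self-loops
        snapshots.append(adj)
        movers = rng.choice(n, size=moves_per_step, replace=False)
        communities[movers] = (communities[movers]
                               + rng.integers(1, 3, size=moves_per_step)) % 3
    return snapshots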
4.2 Algorithms and Evaluation Metrics

We compare the performance of the following dynamic embedding algorithms on several tasks:

• SDNE: We apply SDNE independently to each snapshot of the dynamic network. (We replaced the sigmoid activation in all our SDNE baselines with ReLU activations for scalability and faster training times.)

• [SDNE/GF]align: We first apply the SDNE or GF algorithm independently to each snapshot and rotate the embedding as in [Hamilton et al., 2016] for alignment.

• GFinit: We apply the GF algorithm whose embedding at time t is initialized from the embedding at time t − 1.

• DynGEM: Our algorithm for dynamic graph embedding.

We set the embedding dimension d = 20 for ENRON and 100 for all other datasets. We use two hidden layers in the deep autoencoder, with initial sizes (which can later expand) for each dataset: ENRON = [100, 80]; HEP-TH, AS, SYN = [500, 300].
The neural network structures are chosen by an informal search over a set of architectures. We set ρ = 0.3, the step-size decay for SGD to 10−5, and the momentum coefficient to 0.99. The other parameters are set via grid search with appropriate cross-validation as follows: α ∈ [10−6, 10−5], β ∈ [2, 5], ν1 ∈ [10−4, 10−6] and ν2 ∈ [10−3, 10−6].

In our experiments, we evaluate the performance of the above models on the tasks of graph reconstruction, link prediction, embedding stability and anomaly detection. For the first two tasks, graph reconstruction and link prediction, we use Mean Average Precision (MAP) as our metric (see [Wang et al., 2016] for the definition). To evaluate the stability of the dynamic embedding, we use the stability constant KS(F) defined in section 2. All experiments are performed on an Ubuntu 14.04.4 LTS system with 32 cores, 128 GB RAM and a clock speed of 2.6 GHz. The GPU used is an Nvidia Tesla K40C.

5 Results and Analysis

5.1 Graph Reconstruction

Embeddings, as good low-dimensional representations of a graph, are expected to accurately reconstruct the graph. We reconstruct the graph edges between pairs of vertices from the embeddings, using the decoder from our autoencoder model. We rank pairs of vertices according to their reconstructed proximity, and then calculate the ratio of real links in the top k pairs of vertices as the reconstruction precision.
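The reconstruction precision described above can be computed directly from the decoder output. The helper below is our own sketch (names and the value of k are hypothetical), not the authors' evaluation code; the paper's reported metric is MAP, for whose definition we refer to [Wang et al., 2016].

import numpy as np

def precision_at_k(reconstruction, adjacency, k=1000):
    # reconstruction: (n, n) decoder output; adjacency: (n, n) ground-truth weights.
    n = adjacency.shape[0]
    iu = np.triu_indices(n, k=1)                   # candidate node pairs (i < j)
    scores = reconstruction[iu]
    top = np.argsort(-scores)[:k]                  # k pairs with the highest reconstructed weight
    hits = (adjacency[iu][top] > 0).sum()
    return hits / k

Calling precision_at_k on a snapshot's reconstructed adjacency matrix returns the fraction of the k highest-scored pairs that are actual edges of that snapshot.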
            SYN     HEP-TH   AS      ENRON
GFalign     0.119   0.49     0.164   0.223
GFinit      0.126   0.52     0.164   0.31
SDNEalign   0.124   0.04     0.07    0.141
SDNE        0.987   0.51     0.214   0.38
DynGEM      0.987   0.491    0.216   0.424

Table 3: Average MAP of graph reconstruction.

The graph reconstruction MAP, averaged over snapshots, is shown for each of our datasets in Table 3. The results show that DynGEM outperforms all Graph Factorization based baselines, except on HEP-TH where its performance is comparable to the baselines.
5.2 Link Prediction

Another important application of graph embedding is link prediction, which tests how well a model can predict unobserved edges. A good representation of the network should not only be able to reconstruct the edges visible to it during training, but should also be able to predict edges which are likely but missing in the training data. To test this, we randomly hide 15% of the network edges at time t (call them Gt_hidden). We train a dynamic embedding using the snapshots {G1, · · · , Gt−1, Gt \ Gt_hidden} and predict the hidden edges at snapshot t. We predict the most likely (highest weighted) edges which are not in the observed set of edges as the hidden edges. The predictions are then compared against Gt_hidden to obtain the precision scores. The prediction accuracy averaged over t from 1 to T is shown in Table 4. We observe that DynGEM is able to predict missing edges better than the baselines on all datasets. Since the Graph Factorization based approaches perform consistently worse than the SDNE based approaches and our DynGEM algorithm, we only present results comparing DynGEM to the SDNE based algorithms for the remaining tasks, due to space constraints.

            SYN     HEP-TH   AS      ENRON
GFalign     0.027   0.04     0.09    0.021
GFinit      0.024   0.042    0.08    0.017
SDNEalign   0.031   0.17     0.1     0.06
SDNE        0.034   0.1      0.09    0.081
DynGEM      0.194   0.26     0.21    0.084

Table 4: Average MAP of link prediction.

5.3 Stability of Embedding Methods

Stability of the embeddings is crucial for tasks like anomaly detection. We evaluate the stability of our model and compare it to other methods on four datasets in terms of the stability constants in Table 5. Our model substantially outperforms the other models and provides stable embeddings along with better graph reconstruction performance (see Table 3 in section 5.1 for the reconstruction performance at this stability). In the next section, we show that we can utilize this stability for visualization and for detecting anomalies in real dynamic networks.

            SYN     HEP-TH   AS      ENRON
SDNE        0.18    14.715   6.25    19.722
SDNEalign   0.11    8.516    2.269   18.941
DynGEM      0.008   1.469    0.125   1.279

Table 5: Stability constants KS(F) of embeddings on dynamic graphs.

5.4 Visualization

One important application of graph embedding is graph visualization.

Figure 2: 2D visualization of 100-dimensional embeddings for the SYN dataset, when nodes change their communities over a time step. (a) DynGEM time step with 5 nodes jumping out of 1000; (b) DynGEM time step with 300 nodes jumping out of 1000. A point in any plot represents the embedding of a node in the graph, with the color indicating the node community. Small (big) points are nodes which didn't (did) change communities. Each big point is colored according to its final community color.
We carry out experiments on the SYN dataset with known community structure. We apply t-SNE [Maaten and Hinton, 2008] to the embedding generated by DynGEM at each time step to plot the resulting 2D embeddings. To avoid instability of the visualization over time steps, we initialize t-SNE with an identical random state for all time steps.

Figure 2 illustrates the results of the 2D visualization of 100-dimensional embeddings for the SYN dataset, when nodes change their communities over a single time step. The left (right) plot in each subfigure shows the embedding before (after) the nodes change their communities. A point in any plot represents the embedding of a node in the graph, with the color indicating the node community. Small (big) points are nodes which didn't (did) change communities. Each big point is colored according to its final community color.

We observe that the DynGEM embeddings of the nodes which changed communities follow the changes in community structure accurately, without disturbing the embeddings of other nodes, even when the fraction of such nodes is very high (see Figure 2b, where 30% of the nodes change communities). This strongly demonstrates the stability of our technique.

5.5 Application to Anomaly Detection

Anomaly detection is an important application for detecting malicious activity in networks. We apply DynGEM to the Enron dataset to detect anomalies and compare our results with the publicly known events occurring to the company, as observed by [Sun et al., 2007].

We define ∆t as the change in embedding between consecutive time steps: ∆t = ‖Ft+1(Vt) − Ft(Vt)‖F, and this quantity can be thresholded to detect anomalies. The plot of ∆t over time on the Enron dataset is shown in Figure 3.

Figure 3: Results of anomaly detection on Enron, and visualization of the embeddings for weeks 93, 94 and 101 of the Enron dataset.
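A minimal version of this detector (our sketch; the threshold rule is our assumption, since the paper only states that ∆t can be thresholded) computes ∆t from consecutive embeddings and flags time steps where it exceeds a multiple of its median:

import numpy as np

def anomaly_scores(embeddings, n_nodes):
    # embeddings[t]: embedding matrix F_t; n_nodes[t]: |V_t|, assuming the first
    # |V_t| rows of F_{t+1} correspond to the nodes already present at time t.
    return np.array([
        np.linalg.norm(embeddings[t + 1][: n_nodes[t]] - embeddings[t][: n_nodes[t]])
        for t in range(len(embeddings) - 1)
    ])

def detect_anomalies(embeddings, n_nodes, factor=3.0):
    # Flag time steps whose embedding drift is much larger than typical.
    delta = anomaly_scores(embeddings, n_nodes)
    threshold = factor * np.median(delta)
    return np.where(delta > threshold)[0] + 1      # report the later snapshot index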
In the figure, we see three major spikes around weeks 45, 55 and 94, which correspond to February 2001, June 2001 and January 2002. These months were associated with the following events: Jeffrey Skilling took over as CEO in February 2001; Rove divested his stocks in energy in June 2001; and the CEO resignation and the crime investigation by the FBI began in January 2002. We also observe some peaks leading up to each of these time frames, which indicate the onset of these events. Figure 3 shows embedding visualizations around week 94. A spread-out embedding can be observed for weeks 93 and 101, corresponding to low communication among employees. On the contrary, the volume of communication grew significantly in week 94 (shown by the highly compact embedding).

5.6 Effect of Layer Expansion

We evaluate the effect of layer expansion on the HEP-TH dataset. For this purpose, we run our model DynGEM with and without layer expansion. We observe that without layer expansion, the model achieves an average MAP of 0.46 and 0.19 for graph reconstruction and link prediction respectively. Note that this is significantly lower than the performance of DynGEM with layer expansion, which obtains 0.491 and 0.26 for the respective tasks. Also note that for SDNE and SDNEalign, we select the best model at each time step. Using the PropSize heuristic obviates this need and automatically selects a good neural network size for subsequent time steps.

5.7 Scalability

We now compare the time taken to learn the different embedding models. From Table 6, we observe that DynGEM is significantly faster than SDNEalign. We do not compare it with Graph Factorization based methods because, although fast, they are vastly outperformed by the deep autoencoder based models. Assuming ns iterations to learn a single snapshot embedding from scratch and ni iterations to learn embeddings when initialized with the previous time step's embedding, the expected speedup for a dynamic graph of length T is defined as T·ns / (ns + (T − 1)·ni), ignoring other overheads. We compare the observed speedup with the expected speedup. In Table 7, we show that our model achieves a speedup closer to the expected speedup as the number of graph snapshots increases, due to the diminished effect of overhead computations (e.g. saving, loading, expansion and initialization of the model, weights and the embedding). Our experiment results
show that DynGEM achieves a consistent 2-3X speedup across a variety of different networks.

            SYN        HEP-TH     AS         ENRON
SDNEalign   56.6 min   71.4 min   210 min    7.69 min
DynGEM      13.8 min   25.4 min   80.2 min   3.48 min
Speedup     4.1        2.81       2.62       2.21
Speedupexp  4.8        3          3          3

Table 6: Computation time of embedding methods for the first forty time steps of each dataset. Speedupexp is the expected speedup based on the model parameters.

            T = 5      T = 10     T = 20     T = 40
SDNEalign   6.99 min   14.24 min  26.6 min   56.6 min
DynGEM      2.63 min   4.3 min    7.21 min   13.8 min
Speedup     2.66       3.31       3.69       4.1

Table 7: Computation time of embedding methods on SYN while varying the length of the time series T.
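To make the expected-speedup formula concrete, here is a small worked example in Python (ours; the iteration counts ns and ni below are hypothetical, chosen only to illustrate the trend, not taken from the paper):

def expected_speedup(T, n_s, n_i):
    # Full training on the first snapshot (n_s iterations), warm-started
    # training on the remaining T - 1 snapshots (n_i iterations each),
    # compared against training every snapshot from scratch.
    return (T * n_s) / (n_s + (T - 1) * n_i)

# Illustrative only: with n_s = 50 and n_i = 10 (hypothetical values),
# the speedup grows with the number of snapshots and approaches n_s / n_i = 5.
for T in (5, 10, 20, 40):
    print(T, round(expected_speedup(T, 50, 10), 2))   # 2.78, 3.57, 4.17, 4.55

This mirrors the trend in Table 7, where the observed speedup on SYN rises from 2.66 at T = 5 to 4.1 at T = 40.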
6 Conclusion

In this paper, we propose DynGEM, a fast and efficient algorithm to construct stable embeddings for dynamic graphs. It uses a dynamically expanding deep autoencoder to capture highly nonlinear first-order and second-order proximities of the graph nodes. Moreover, our model utilizes information from previous time steps to speed up the training process by incrementally learning embeddings at each time step. Our experiments demonstrate the stability of our technique across time and show that our method maintains its competitiveness on all evaluation tasks, e.g., graph reconstruction, link prediction and visualization. We showed that DynGEM preserves community structures accurately, even when a large fraction of nodes (∼30%) change communities across time steps. We also applied our technique to successfully detect anomalies, which is a novel application of dynamic graph embedding. DynGEM shows great potential for many other graph inference applications such as node classification and clustering, which we leave as future work.

There are several directions for future work. Our algorithm ensures stability by initializing from the weights learned at the previous time step. We plan to extend it to incorporate the stability metric explicitly, with modifications ensuring satisfactory performance on anomaly detection. We also hope to provide theoretical insight into the model and obtain bounds on its performance.

Acknowledgments

This work is supported in part by NSF Research Grants IIS-1254206 and IIS-1619458. The views and conclusions are those of the authors and should not be interpreted as representing the official policies of the funding agency or the U.S. Government. The work was also supported in part by a USC Viterbi Graduate PhD fellowship.

References

[Ahmed et al., 2013] Amr Ahmed, Nino Shervashidze, Shravan Narayanamurthy, Vanja Josifovski, and Alexander J Smola. Distributed large-scale natural graph factorization. In 22nd Intl. World Wide Web Conference, pages 37–48, 2013.

[Belkin and Niyogi, 2001] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Proc. 13th Advances in Neural Information Processing Systems, pages 585–591, 2001.

[Bengio et al., 2013] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828, 2013.

[Cao et al., 2015] Shaosheng Cao, Wei Lu, and Qiongkai Xu. GraRep: Learning graph representations with global structural information. In Proc. 21st Intl. Conf. on Knowledge Discovery and Data Mining, pages 891–900, 2015.

[Cao et al., 2016] Shaosheng Cao, Wei Lu, and Qiongkai Xu. Deep neural networks for learning graph representations. In AAAI, pages 1145–1152, 2016.

[Chang et al., 2015] Shiyu Chang, Wei Han, Jiliang Tang, Guo-Jun Qi, Charu C Aggarwal, and Thomas S Huang. Heterogeneous network embedding via deep architectures. In Proc. 21st ACM SIGKDD Intl. Conf. on Knowledge Discovery and Data Mining, pages 119–128. ACM, 2015.

[Chen et al., 2015] Tianqi Chen, Ian Goodfellow, and Jonathon Shlens. Net2Net: Accelerating learning via knowledge transfer. arXiv preprint arXiv:1511.05641, 2015.

[Dai et al., 2017] Hanjun Dai, Yichen Wang, Rakshit Trivedi, and Le Song. Deep coevolutionary network: Embedding user and item features for recommendation. 2017.

[Gehrke et al., 2003] Johannes Gehrke, Paul Ginsparg, and Jon Kleinberg. Overview of the 2003 KDD Cup. ACM SIGKDD Explorations Newsletter, 5(2):149–151, 2003.

[Glorot et al., 2011] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In Proc. 14th Intl. Conf. on Artificial Intelligence and Statistics, page 275, 2011.

[Goyal and Ferrara, 2017] Palash Goyal and Emilio Ferrara. Graph embedding techniques, applications, and performance: A survey. arXiv preprint arXiv:1705.02801, 2017.

[Grover and Leskovec, 2016] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proc. 22nd Intl. Conf. on Knowledge Discovery and Data Mining, pages 855–864, 2016.

[Hamilton et al., 2016] William L Hamilton, Jure Leskovec, and Dan Jurafsky. Diachronic word embeddings reveal statistical laws of semantic change. arXiv preprint arXiv:1605.09096, 2016.

[Huang et al., 2017a] Xiao Huang, Jundong Li, and Xia Hu. Accelerated attributed network embedding. In Proc. 2017 SIAM Intl. Conf. on Data Mining, pages 633–641. SIAM, 2017.

[Huang et al., 2017b] Xiao Huang, Jundong Li, and Xia Hu. Label informed attributed network embedding. In Proc. 10th ACM Intl. Conf. on Web Search and Data Mining, pages 731–739. ACM, 2017.

[Klimt and Yang, 2004] Bryan Klimt and Yiming Yang. The Enron corpus: A new dataset for email classification research. In European Conference on Machine Learning, pages 217–226, 2004.

[Kulkarni et al., 2015] Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. Statistically significant detection of linguistic change. In 24th Intl. World Wide Web Conference, pages 625–635. ACM, 2015.

[Leskovec and Krevl, 2014] Jure Leskovec and Andrej Krevl. SNAP Datasets: Stanford large network dataset collection, 2014.

[Leskovec et al., 2007] Jure Leskovec, Jon Kleinberg, and Christos Faloutsos. Graph evolution: Densification and shrinking diameters. ACM Transactions on Knowledge Discovery from Data (TKDD), 1(1):2, 2007.

[Maaten and Hinton, 2008] Laurens van der Maaten and Geoffrey Hinton. Visualizing high-dimensional data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.

[Niepert et al., 2016] Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In Intl. Conf. on Machine Learning, pages 2014–2023, 2016.

[Ou et al., 2016] Mingdong Ou, Peng Cui, Jian Pei, Ziwei Zhang, and Wenwu Zhu. Asymmetric transitivity preserving graph embedding. In Proc. 22nd Intl. Conf. on Knowledge Discovery and Data Mining, pages 1105–1114, 2016.

[Park et al., 2009] Youngser Park, C Priebe, D Marchette, and Abdou Youssef. Anomaly detection using scan statistics on time series hypergraphs. In Link Analysis, Counterterrorism and Security (LACTS) Conference, page 9, 2009.

[Perozzi et al., 2014] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. DeepWalk: Online learning of social representations. In Proc. 20th Intl. Conf. on Knowledge Discovery and Data Mining, pages 701–710, 2014.

[Roweis and Saul, 2000] Sam T Roweis and Lawrence K Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.

[Sun et al., 2007] Jimeng Sun, Christos Faloutsos, Spiros Papadimitriou, and Philip S Yu. GraphScope: Parameter-free mining of large time-evolving graphs. In Proc. 13th Intl. Conf. on Knowledge Discovery and Data Mining, pages 687–696, 2007.

[Sutskever et al., 2013] Ilya Sutskever, James Martens, George E Dahl, and Geoffrey E Hinton. On the importance of initialization and momentum in deep learning. In Proc. 30th Intl. Conf. on Machine Learning, pages 1139–1147, 2013.

[Tang et al., 2015] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. LINE: Large-scale information network embedding. In 24th Intl. World Wide Web Conference, pages 1067–1077, 2015.

[Tenenbaum et al., 2000] Joshua B Tenenbaum, Vin de Silva, and John C Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.

[Wang and Wong, 1987] Yuchung J Wang and George Y Wong. Stochastic blockmodels for directed graphs. Journal of the American Statistical Association, 82(397):8–19, 1987.

[Wang et al., 2016] Daixin Wang, Peng Cui, and Wenwu Zhu. Structural deep network embedding. In Proc. 22nd Intl. Conf. on Knowledge Discovery and Data Mining, pages 1225–1234, 2016.

[Zhu et al., 2016] Linhong Zhu, Dong Guo, Junming Yin, Greg Ver Steeg, and Aram Galstyan. Scalable temporal latent space inference for link prediction in dynamic social networks. IEEE Transactions on Knowledge and Data Engineering, 28(10):2765–2777, 2016.