Open access

A Survey of Graph Neural Networks for Social Recommender Systems

Published: 22 June 2024

Abstract

Social recommender systems (SocialRS) simultaneously leverage the user-to-item interactions as well as the user-to-user social relations for the task of generating item recommendations to users. Additionally exploiting social relations is clearly effective in understanding users’ tastes due to the effects of homophily and social influence. For this reason, SocialRS has increasingly attracted attention. In particular, with the advance of graph neural networks (GNN), many GNN-based SocialRS methods have been developed recently. Therefore, we conduct a comprehensive and systematic review of the literature on GNN-based SocialRS.
In this survey, we first identify 84 papers on GNN-based SocialRS after annotating 2,151 papers by following the PRISMA framework (preferred reporting items for systematic reviews and meta-analyses). Then, we comprehensively review them in terms of their inputs and architectures to propose a novel taxonomy: (1) input taxonomy includes five groups of input type notations and seven groups of input representation notations; (2) architecture taxonomy includes eight groups of GNN encoder notations, two groups of decoder notations, and 12 groups of loss function notations. We classify the GNN-based SocialRS methods into several categories as per the taxonomy and describe their details. Furthermore, we summarize benchmark datasets and metrics widely used to evaluate the GNN-based SocialRS methods. Finally, we conclude this survey by presenting some future research directions. A GitHub repository with the curated list of papers is available at https://github.com/claws-lab/awesome-GNN-social-recsys.

1 Introduction

With the advent of online social network platforms (e.g., Facebook, X (formerly known as Twitter), and Instagram), there has been a surge of research efforts in developing social recommender systems (SocialRS), which simultaneously utilize user-user social relations along with user-item interactions to recommend relevant items to users. Exploiting social relations in recommendation works well because of the effects of social homophily [62] and social influence [61]: (1) social homophily indicates that a user tends to connect herself to other users with similar attributes and preferences, and (2) social influence indicates that users with direct or indirect relations tend to influence each other and become more similar. Accordingly, SocialRS can effectively mitigate the data sparsity problem by exploiting social neighbors to capture the preferences of a sparsely interacting user.
Literature has shown that SocialRS can be applied successfully in various recommendation domains (e.g., product [104, 106], music [120, 121, 122], location [39, 74, 103], and image [88, 102, 105]), thereby improving user satisfaction. Furthermore, techniques and insights explored from SocialRS can also be exploited in real-world applications other than recommendations. For instance, García-Sánchez et al. [20] leveraged SocialRS to design a decision-making system for marketing (e.g., advertisement), while Gasparetti et al. [21] analyzed SocialRS in terms of community detection (CD).
Motivated by such wide applicability, there has been an increasing interest in research on developing accurate SocialRS models. In the early days, research focused on matrix factorization (MF) techniques [28, 54, 55, 56, 57, 86, 116]. However, MF-based methods cannot effectively model the complex (i.e., non-linear) relationships inherent in user-user social relations and user-item interactions [78]. Motivated by this, most recent works have focused on applying deep-learning techniques to SocialRS, e.g., autoencoder [11, 119], generative adversarial networks (GAN) [35], and graph neural networks (GNN) [16, 105].
In particular, since user-item interactions and user-user social relations can naturally be represented as graph data, GNN-based SocialRS has increasingly attracted attention in the literature. As a demonstration, Figure 1 shows that the number of papers related to GNN-based SocialRS has increased consistently since 2019. Given the growing and timely interest in this area, we review GNN-based SocialRS methods in this article.
Fig. 1.
Fig. 1. The number of papers related to GNN-based SocialRS per year. \(^\star\) For 2022, we count the number of relevant papers published until October.

1.1 Challenges

Applying GNNs to SocialRS is non-trivial and faces the following challenges:
Input representation. The input data should be modeled appropriately as a heterogeneous graph structure. Many SocialRS methods build two separate graphs: one where nodes represent users and items, and edges represent user-item interactions; the other where nodes represent users and edges represent user-user social relations. Thus, GNN methods for SocialRS need to extract knowledge from both networks simultaneously for accurate inference. This is in contrast with most regular GNNs that consider only a single network. Additionally, we note that there are valuable input features in the two networks, such as user/item attributes, item knowledge/relation, and group information. Thus, GNN-based SocialRS methods fuse these features with the network information. In this survey, we discuss the input types used in GNN-based SocialRS methods and the different ways they are represented as graphs.
Design of GNN encoder. The performance of GNN-based SocialRS methods relies heavily on their GNN encoders, which aim to represent users and items into low-dimensional embeddings. For this reason, existing SocialRS methods have explored various design choices regarding GNN encoders and have adopted different architectures according to their goals. For instance, many SocialRS methods employ the graph attention neural network (GANN) [90] to differentiate each user’s preference for items or each user’s influence on their social friends. On the other hand, some methods [22, 66, 67, 84, 115] use the graph recurrent neural networks (GRNN) [69, 124] to model the sequential behaviors of users. It should be noted that GNN encoders for SocialRS need to simultaneously consider the characteristics of user-item interactions and user-user social relations. This is in contrast with GNN encoders for non-SocialRS that model only user-item interactions. In this survey, we discuss different types of GNN encoders used by SocialRS methods.
Training. The training of GNN-based SocialRS should be designed to reflect users’ tastes and items’ characteristics in the embeddings for the corresponding users and items. To this end, SocialRS methods employ well-known loss functions, such as mean squared error (MSE), Bayesian personalized ranking (BPR) [72], and cross-entropy (CE), to reconstruct user behaviors. Furthermore, to mitigate the data sparsity problem, some works have additionally employed auxiliary loss functions such as self-supervised loss (SSL) [49] and group-based loss [36, 42]. It is worth mentioning that loss functions used by GNN-based SocialRS are designed so that rich structural information such as motifs and user attributes can be exploited. These are not considered by loss functions for non-SocialRS. In this survey, we discuss the training strategies of GNN-based SocialRS methods to learn the user and item embeddings.

1.2 Related Surveys

Most of the existing surveys, which fully cover SocialRS papers, focus either on traditional methods [7, 14, 68, 77, 87, 113, 118] (e.g., MF), feature information [79] (e.g., context), or a specific application [21] (e.g., CD). Other related surveys [12, 19, 96, 107] focus on graph-based recommender systems, including GNN-based RS methods, but only partially cover SocialRS papers. A comparison between the current survey and the previous surveys is shown in Table 1.
Table 1.
Surveys | SocialRS | GNN | # Papers | Latest Year | Scope
[7, 14, 87, 113, 118] | ✓ | | 0 | - | Traditional SocialRS
[21] | ✓ | | 0 | - | SocialRS for CD
[79] | ✓ | | 0 | - | General SocialRS
[78] | ✓ | (✓) | 1 | 2019 | General SocialRS
[12] | (✓) | ✓ | 2 | 2019 | Graph-based RS
[96] | (✓) | ✓ | 3 | 2020 | Graph-based RS
[107] | (✓) | ✓ | 14 | 2021 | GNN-based RS
[19] | (✓) | ✓ | 19 | 2021 | GNN-based RS
Ours | ✓ | ✓ | 80 | Oct. 2022 | GNN-based SocialRS
Table 1. Comparison with Existing Surveys
✓: fully covered; (✓): partially covered.
# Papers: the number of GNN-based SocialRS papers included in the survey.
Latest year: the latest publication year of a relevant paper included in the survey.
For each survey, we summarize the topics covered, some statistics regarding GNN-based SocialRS papers (i.e., relevant papers), and the main scope to survey.
Specifically, several survey papers on SocialRS have been published before 2019 [7, 14, 68, 77, 87, 113, 118]. However, they only focus on traditional methods such as MF and collaborative filtering. These surveys largely ignore methods that use modern-day deep-learning techniques, in particular GNN.
More recent surveys discuss the taxonomy of social recommendation, including comparisons of deep-learning-based techniques [21, 78, 79]. However, Shokeen and Rana [79] only focus on the taxonomy of feature information regarding social relations, such as context, trust, and group, used in SocialRS methods, while Gasparetti et al. [21] only discuss SocialRS methods using CD techniques. Shokeen and Rana [78] include just one social recommendation method based on GNNs.
With the advent of GNNs in recommender systems, multiple surveys have been conducted on graph-based recommender systems [12, 19, 96, 107]. However, their focus is not on SocialRS, as they consider different kinds of recommender systems where graph learning is employed. They cover only a small selection of the most representative papers on GNN-based SocialRS. Thus, one cannot rely on these surveys to gain insights into the ever-growing field of applying GNNs to SocialRS.
As shown in Table 1, no survey paper exists in the literature that focuses specifically on GNN-based SocialRS methods. In the current work, we aim to fill this gap by providing a comprehensive and systematic survey on GNN-based SocialRS methods.

1.3 Contributions

The main contributions of this survey article are summarized as follows:
The First Survey in GNN-based SocialRS: To the best of our knowledge, we are the first to systematically dedicate ourselves to reviewing GNN-based SocialRS methods. Most of the existing surveys focus either on traditional methods [7, 14, 68, 77, 87, 113, 118] (e.g., MF), feature information [79] (e.g., context), or a specific application [21] (e.g., CD). The other related surveys [12, 19, 96, 107] focus on graph-based recommender systems, but they partially cover SocialRS.
Comprehensive Survey: We systematically identify the relevant papers on GNN-based SocialRS by following the guidelines of the preferred reporting items for systematic reviews and meta-analyses (PRISMA framework) [64]. Then, we comprehensively review them in terms of their inputs and architectures. Figure 2 provides a brief timeline of GNN-based SocialRS methods. In addition, Figure 3 shows the number of relevant papers published in relevant journals (e.g., IEEE TKDE and ACM TOIS) and conferences (e.g., WWW, ACM SIGIR, and ACM CIKM).
Novel Taxonomy of Inputs and Architectures: We provide a novel taxonomy of inputs and architectures in GNN-based SocialRS methods, enabling researchers to capture the research trends in this field easily. An input taxonomy includes five groups of input type notations and seven groups of input representation notations. On the other hand, an architecture taxonomy includes eight groups of GNN encoder notations, two groups of decoder notations, and 12 groups (four for primary losses and eight for auxiliary losses) of loss function notations.
Benchmark Datasets: We review 17 benchmark datasets used to evaluate the performance of GNN-based SocialRS methods. We group the datasets into eight domains (i.e., product, location, movie, image, music, bookmark, microblog, and miscellaneous). Also, we present some statistics for each dataset and a list of papers using the dataset.
Future Directions: We discuss the limitations of existing GNN-based SocialRS methods and provide several future research directions.
Fig. 2.
Fig. 2. A timeline of GNN-based SocialRS methods. We categorize methods according to their GNN encoders: graph convolutional network (GCN), lightweight GCN (LightGCN), GANN, heterogeneous GNN (HetGNN), GRNN, hypergraph neural networks (HyperGNN), graph autoencoder (GAE), and hyperbolic GNN. It should be noted that some methods employ two or more GNN encoders in their architectures.
Fig. 3.
Fig. 3. The number of GNN-based SocialRS papers published in relevant journals and conferences. We only present statistics with respect to prominent data mining journals (including IEEE TKDE, ACM TOIS, Knowledge-Based Systems, and Information Sciences) and conferences (including WWW, ACM SIGIR, ACM KDD, ACM CIKM, ACM WSDM, IEEE ICDE, and IEEE ICDM). We believe it would help researchers in this field to identify appropriate venues where GNN-based SocialRS papers are published.
The rest of this survey paper is organized as follows. In Section 2, we introduce the survey methodology based on PRISMA [64] that collects the papers on GNN-based SocialRS thoroughly. In Section 3, we define the social recommendation problem. In Sections 4 and 5, we review 84 GNN-based SocialRS methods in terms of their inputs and architectures, respectively. We summarize 17 benchmark datasets and 8 evaluation metrics widely used in GNN-based SocialRS research in Section 6. Section 7 discusses future research directions. Finally, we conclude the article in Section 8.

2 Survey Methodology

Following the guidelines set by the PRISMA [64], the Scopus index was queried to filter for relevant literature. In particular, the following query was run on October 14, 2022, resulting in 2,151 papers.
TITLE-ABS-KEY (social AND (recommendation OR recommender) AND graph) AND ( PUBYEAR > 2009) AND ( LIMIT-TO ( LANGUAGE , “English”))
To obtain the final list of relevant papers for the current survey, an iterative strategy of manual reviewing and filtering was carried out, following PRISMA guidelines. Four expert annotators selected the relevant papers. Before reviewing the papers, a thorough discussion was held among the annotators to agree upon the definitions of the main concepts that a paper is to be examined for before including it in the survey, namely GNNs and social recommendation.
Based on these guidelines, all four annotators jointly labeled a common batch of 200 papers. Each paper in this batch was assigned one of three categories by each annotator: “Yes”, “No”, or “Maybe”. “Yes” represents full confidence of relevance, “Maybe” represents partial confidence of relevance, and “No” represents full confidence of irrelevance for the current survey. A high inter-annotator agreement of 0.845 was reported on this set.
The remaining papers were then divided equally among the annotators without any overlap. The annotator assigned each paper a label of “Yes”, “No”, or “Maybe”. Papers marked “Maybe” were reviewed again by the other annotators to reach a consensus. Finally, papers marked “Yes” were collected together and these served as the focus of our survey. Through this comprehensive process of filtering, we finally found 84 papers that study GNN-based SocialRS for our survey paper.

3 Notations AND Problem Definition

The social recommendation problem is formulated as follows. Let \(\mathcal {U}=\lbrace p_1,p_2,\cdots ,p_m\rbrace\) and \(\mathcal {I}=\lbrace q_1,q_2,\cdots ,q_n\rbrace\) be sets of m users and n items, respectively. Also, \(\mathbf {R}\in \mathbb {R}^{m\times n}\) represents a rating matrix that stores user-item ratings (that we call U-I rating). \(\mathbf {S}\in \mathbb {R}^{m\times m}\) represents a social matrix that stores user-user social relations (that we call U-U social). In addition, \(\mathcal {N}_{p_i}\) indicates a set of items rated by a user \(p_i\) . In this article, we use bold uppercase letters and bold lowercase letters to denote matrices and vectors, respectively. Also, we use calligraphic letters to denote sets and graphs. Table 2 summarizes a list of notations used in this article.
Table 2.
NotationDescription
\(\mathcal {U}\) , \(\mathcal {I}\) Sets of users \(p_i\) and items \(q_j\)
\(\mathbf {R}\) , \(\mathbf {S}\) Matrices representing U-I rating and U-U social
\(\mathcal {N}_{p_i}\) Set of items rated by \(p_i\)
\(\mathbf {u}_i^I\) Embedding of \(p_i\) obtained via the user interaction encoder
\(\mathbf {u}_i^S\) Embedding of \(p_i\) obtained via the user social encoder
\(\mathbf {u}_i\) Embedding of \(p_i\) obtained by fusing \(\mathbf {u}_i^I\) and \(\mathbf {u}_i^S\)
\(\mathbf {v}_j\) Embedding of \(q_j\) via the item encoder
\(r_{ij}\) Real rating score of \(p_i\) on \(q_j\)
\(\hat{r}_{ij}\) Predicted preference of \(p_i\) on \(q_j\) via the decoder
Table 2. Notations used in this article
The goal of GNN-based SocialRS methods is to solve the rating prediction and/or top-N recommendation tasks. Given \(\mathbf {R}\) and \(\mathbf {S}\) , both tasks are formally defined as follows:
Problem 1 (Rating Prediction).
The goal is to predict the rating values for unrated items (i.e., \(\mathcal {I}\) \ \(\mathcal {N}_{p_i}\) ) in \(\mathbf {R}\) as close as possible to the ground truth.
Problem 2 (Top-N Recommendation).
The goal is to recommend the top-N items that are most likely to be preferred by each user \(p_i\) among \(p_i\) ’s unrated items (i.e., \(\mathcal {I}\) \ \(\mathcal {N}_{p_i}\) ).
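Concretely, the top-N recommendation task reduces to ranking each user's unrated items by predicted preference. The following minimal sketch (toy data; the function name and shapes are our own illustration, not taken from any surveyed method) shows this with a precomputed score matrix:

```python
import numpy as np

def top_n_recommend(r_hat, rated_mask, n=2):
    """Rank each user's unrated items by predicted preference.

    r_hat      : (m, num_items) matrix of predicted preferences r_hat[i, j]
    rated_mask : boolean matrix, True where user i already rated item j
    """
    scores = np.where(rated_mask, -np.inf, r_hat)  # keep only unrated items
    return np.argsort(-scores, axis=1)[:, :n]      # best-n item ids per user

# toy example: 2 users, 4 items
r_hat = np.array([[0.9, 0.1, 0.8, 0.3],
                  [0.2, 0.7, 0.4, 0.6]])
rated = np.array([[True, False, False, False],
                  [False, False, True, False]])
recs = top_n_recommend(r_hat, rated, n=2)  # [[2, 3], [1, 3]]
```

Note that user 0's highest raw score (item 0) is excluded because it is already rated, matching the \(\mathcal {I}\) \ \(\mathcal {N}_{p_i}\) restriction above.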

4 Taxonomy of Inputs

In this section, we present a taxonomy of inputs for GNN-based SocialRS. Figures 4 and 5 depict the input types and their representations, respectively. In the subsequent subsections, we describe each of these in detail.
Fig. 4.
Fig. 4. Overview of input types used by GNN-based SocialRS methods.
Fig. 5.
Fig. 5. Overview of input representations used by GNN-based SocialRS methods.

4.1 Input Types: Types of Inputs to the Models

In this subsection, we group the input types used by GNN-based SocialRS into five categories: user-item ratings, user-user social relations, attributes, knowledge graph (KG), and groups. Table 3 categorizes all papers based on the input data types they use.
Table 3.
U-I Rating | U-U Social | Additional Features | Models
Static | Homogeneous | - | GraphRec [16], DANSER [106], DICER [18], ASR [32], GNNTSR [59], GAT-NSR [65], SoRecGAT [91], SAGLG [48], PA-GAN [27], MGNN [111], MutualRec [110], GHSCF [2], HIDM [40], SocialLGN [43], SOAP-VAE [92], GraphRec+ [17], DSR [75], SHGCN [134], GTN [26], PDARec [131], MHCN [122], SEPT [120], DcRec [103], GSFR [109], APTE [130], EAGCN [102], HOSR [50], SDCRec [15], SoHRML [51], DESIGN [88], HyperSoRec [93], CGL [127], DISGCN [38], ME-LGN [63], GDSRec [6], SGA [53], SIGA [47], ESRF [121], Motif-Res [85], FeSoG [52], GDMSR [71], MADM [58], DSL [97]
Static | Homogeneous | Item KG | KConvGraph [89], HeteroGraphRec [73], Social-RippleNet [31], SCGRec [117]
Static | Homogeneous | Attributes | GNN-SOR [23], MPSR [46], FBNE [5]
Static | Homogeneous | User and item attributes | Diffnet [105], Diffnet++ [104], DiffnetLG [80], IDiffNet [41], MEGCN [33], SAN [30], SRAN [112], MrAPR [81], SENGR [76], TAG [70], ATGCN [74], HSGNN [100]
Static | Homogeneous | Groups | IGRec [10], GLOW [36]
Static | Homogeneous | Groups and attributes | GMAN [42]
Static | Multiple | Attributes | DH-HGCN [24]
Static | Multiple | User and item attributes | BFHAN [128]
Temporal | Homogeneous | - | EGFRec [22], FuseRec [66], DGARec-R [84], MGSR [67], MOHCN [99], GNNRec [45], DCAN [94], SGHAN [101], SSRGNN [9], GNN-DSR [44], DGRec [83], DREAM [82], SeRec [8], TGRec [1]
Temporal | Homogeneous | Item KG | SSDRec [115]
Multiple | Homogeneous | - | SR-HGNN [114]
Table 3. Taxonomy of Input Types

4.1.1 User-Item Rating.

Users interact with different items as they rate them, thus forming the rating matrix \(\mathbf {R}\in \mathbb {R}^{m \times n}\) . Therefore, each user has a list of items that he/she has interacted with along with the corresponding rating.
The timestamp of the user-item interaction may also be available and can be exploited to recommend items to users at specific points in time. Each rating can thus also be associated with a timestamp for that rating. Some models exploit the temporal information to make more-effective recommendations in continuous time [1, 66, 84] or during a user session [22, 82, 83, 115].
Furthermore, one may also have multi-typed user-item interactions. For example, a user may interact with an item positively (positive rating) or negatively (negative rating). Some models have distinguished among these different interaction types to predict each type more effectively [114].
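As an illustration, the rating matrix \(\mathbf {R}\) can be built from an interaction log, with an optional timestamp kept per interaction for temporal models. A minimal sketch with hypothetical toy data (real systems would store \(\mathbf {R}\) sparsely):

```python
import numpy as np

# hypothetical (user, item, rating, timestamp) interaction log
interactions = [(0, 1, 5.0, 1_690_000_000),
                (0, 2, 3.0, 1_690_000_100),
                (1, 2, 4.0, 1_690_000_200)]

m, n = 2, 3  # numbers of users and items

# R in R^{m x n}: the rating matrix (0 = unrated)
R = np.zeros((m, n))
for u, i, r, _ in interactions:
    R[u, i] = r

# temporal models instead keep the timestamp attached to each interaction
edges_with_time = [(u, i, t) for u, i, _, t in interactions]
```

Static methods consume only `R`, while session- or time-aware methods consume the timestamped edge list directly.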

4.1.2 User-User Social.

The second essential input to SocialRS is the social adjacency matrix \(\mathbf {S} \in \mathbb {R}^{m \times m}\) , storing user-user social relations.
People may be connected to each other via different kinds of social relations. For example, two users may be related if they are friends, if they co-comment on an item, or if one follows the other. DH-HGCN [24] and BFHAN [128] consider such multifaceted, heterogeneous user-user relations in the social network.

4.1.3 Additional Features.

Attributes. Both users and items may have additional attributes that can be encoded by the models to make better social recommendations. User attributes are often features of user profiles on social media (e.g., age and sex), while item attributes are often information about the items, such as price and category. Some models incorporate only user attributes [74, 110], some only item attributes [8, 23], and others incorporate both [30, 33, 80, 104, 105].
Knowledge Graph (KG). Items on a product site are often structured in the form of a KG, where items are related to each other if they have some mutual dependency. Models incorporate such dependencies between items as represented by this KG [73, 89].
Groups. Users are often grouped together denoting a group structure among them. For instance, multiple users can form an online social group based on similar interests or hobbies. Models incorporate the group membership in addition to the social relations to model the social network more effectively [10, 36, 42]. User groups can also be formed based on the businesses that they are part of or are clients of, as in [5].

4.2 Input Representations: Representation of Inputs within the Models

In order to effectively use the available inputs with GNN-based models, SocialRS methods represent them as different graphs. In particular, the input representations employed by GNN-based SocialRS can be grouped into seven categories: U-U/U-I graphs, U-U-I graph, attributed graph, multiplex graph, U-U/U-I/I-I graphs, hypergraph, and decentralized. Table 4 categorizes papers based on the input representation they develop using the input data.
Table 4.
Graph Representations | Models
U-U/U-I | GraphRec [16], DANSER [106], DICER [18], ASR [32], GNNTSR [59], GAT-NSR [65], SoRecGAT [91], SAGLG [48], PA-GAN [27], MGNN [111], MutualRec [110], GHSCF [2], HIDM [40], SocialLGN [43], SOAP-VAE [92], GraphRec+ [17], DSR [75], GTN [26], PDARec [131], SEPT [120], DcRec [103], GSFR [109], APTE [130], EAGCN [102], HOSR [50], SDCRec [15], SoHRML [51], DESIGN [88], HyperSoRec [93], CGL [127], DISGCN [38], GDSRec [6], SGA [53], SIGA [47], ESRF [121], SR-HGNN [114], EGFRec [22], FuseRec [66], DGARec-R [84], MGSR [67], GNNRec [45], DCAN [94], SGHAN [101], GNN-DSR [44], DGRec [83], DREAM [82], SeRec [8], TGRec [1], MADM [58], DSL [97]
U-U-I | SHGCN [134], SSRGNN [9], ME-LGN [63], SENGR [76], IGRec [10], GLOW [36], GDMSR [71]
Attributed | DiffNet [105], DiffNet++ [104], DiffNet-LG [80], IDiffNet [41], MEGCN [33], SAN [30], SRAN [112], MrAPR [81], SENGR [76], TAG [70], ATGCN [74], HSGNN [100], GMAN [42], GNN-SOR [23], MPSR [46], FBNE [5], DH-HGCN [24], BFHAN [128]
Multiplex | DH-HGCN [24], BFHAN [128]
U-U/U-I/I-I | KConvGraph [89], HeteroGraphRec [73], Social-RippleNet [31], SCGRec [117], SSDRec [115], DGNN [108]
Hypergraph | DH-HGCN [24], MHCN [122], SHGCN [134], Motif-Res [85], MOHCN [99]
Decentralized | FeSoG [52]
Table 4. Taxonomy of Input Representations

4.2.1 U-U/U-I Graphs.

The simplest representation of the input for social recommendation is to use separate graphs for a user-user social network and a user-item interaction network. The user-item interaction network is represented as a bipartite graph and the user-user social network is represented as a general undirected/directed graph. Information from the two graphs is encoded separately at the common user node and later aggregated. Most works follow this representation to encode users and items [16, 17, 18, 40, 80, 88, 102, 104, 105, 106, 110, 111].
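A minimal sketch of this two-graph representation, using plain edge lists (the helper and data below are illustrative only, not taken from any surveyed method):

```python
import numpy as np

# U-I bipartite graph: each row is one (user, item) interaction edge
ui_edges = np.array([[0, 0], [0, 2], [1, 1], [2, 2]])
# U-U social graph: each row is one (user, user) relation edge (undirected)
uu_edges = np.array([[0, 1], [1, 2]])

def partners(edges, node, col=0):
    """Nodes connected to `node`, which appears in column `col` of `edges`."""
    return edges[edges[:, col] == node, 1 - col]

items_of_user0 = partners(ui_edges, 0)  # items that user 0 interacted with
friends_of_user1 = np.concatenate(      # undirected: check both columns
    [partners(uu_edges, 1, col=0), partners(uu_edges, 1, col=1)])
```

A GNN encoder then aggregates `items_of_user0` for the interaction embedding and `friends_of_user1` for the social embedding of the same user node.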

4.2.2 U-U-I Graph.

User-user relations and user-item interactions can also be modeled together in a single graph. Here, user-user edges and user-item edges in the graph are distinguished by the types of their end nodes. Many works thus merge the social relation edges and interaction edges into a single graph to obtain node embeddings for both users and items [9, 63, 76, 134].
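A common way to realize this single-graph representation is to shift item ids by the number of users so that both node types share one id space; the sketch below (our own illustration) makes the node type recoverable from the id alone:

```python
import numpy as np

m, n = 3, 4  # numbers of users and items
ui_edges = np.array([[0, 0], [1, 2], [2, 3]])  # (user, item) interactions
uu_edges = np.array([[0, 1], [1, 2]])          # (user, user) relations

# single U-U-I graph: users keep ids 0..m-1, item ids are shifted by m
merged = np.vstack([
    uu_edges,
    np.column_stack([ui_edges[:, 0], ui_edges[:, 1] + m]),
])

def node_type(v):
    return "user" if v < m else "item"
```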

4.2.3 Attributed Graph.

Both user and item nodes may further contain features describing the corresponding entity. For example, users may have profile features while items may have their description features. These features are first encoded numerically and then represented explicitly as node attributes in the U-U/U-I graph or U-U-I graph to make effective recommendations [30, 74, 80, 81, 104, 105, 112]. These attributes are either fused with the learned embeddings or are used as initialization for the GNN layers.
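A minimal sketch of both options, attributes as layer-0 initialization versus fusion with free embeddings, using a random (untrained) projection as a stand-in for a learned one; the toy features are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical toy profiles: 3 users, 2 numeric features (e.g., age, region)
user_attrs = np.array([[0.25, 1.0],
                       [0.50, 0.0],
                       [0.75, 1.0]])
d = 4  # embedding size

# option 1: project raw attributes into the embedding space and use the
# result to initialize the GNN's layer-0 user embeddings (W is random here;
# a real model would learn it jointly with the GNN layers)
W = rng.normal(size=(user_attrs.shape[1], d))
u0 = user_attrs @ W

# option 2: fuse the attribute projection with separately learned
# free embeddings, e.g., by concatenation
free = rng.normal(size=(3, d))
fused = np.concatenate([free, u0], axis=1)
```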

4.2.4 Multiplex Graph.

Users may be related to each other via multiple relationships while they may also interact with items in multiple ways. Such relationships are often represented using a multiplex network, that is, using multiple layers of the U-U/U-I graph, where each layer represents a particular relation type [24, 114, 128].

4.2.5 U-U/U-I/I-I Graphs.

When information on item-item relations is available, an item-item KG is considered in addition to the U-U and U-I graphs. Two item embeddings are obtained separately, one from the U-I interaction graph and the other from the item-item KG, and then aggregated to obtain the final item embedding [73, 89].

4.2.6 Hypergraph.

One may want to incorporate higher-order relations among users and items to explicitly establish organizational properties in the input such as (1) constructing a user-only hyperedge if a group of users are connected together in closed motifs, (2) constructing a user-item joint hyperedge if a group of users interacts with the same item, and (3) constructing an item-item hyperedge if one user interacts with a group of items. Models have been developed to include just user-item joint hyperedges [134], both user-user and user-item joint hyperedges [122], and user-user and item-item hyperedges [24].
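As an illustration of option (2), the sketch below builds one hyperedge per item containing the users who interacted with it, and applies a degree-normalized smoothing step loosely patterned after hypergraph convolution (the toy data and the exact normalization are our simplification, not a specific surveyed model):

```python
import numpy as np

# toy interaction matrix: R[i, j] = 1 if user i interacted with item j
R = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]], dtype=float)

# user-item joint hyperedges: one hyperedge per item, grouping every user
# that interacted with it; the incidence matrix is then H = (R > 0)
H = (R > 0).astype(float)

Dv = H.sum(axis=1, keepdims=True)  # vertex (user) degrees
De = H.sum(axis=0, keepdims=True)  # hyperedge sizes

# one degree-normalized smoothing step: users exchange information through
# the hyperedges (items) they share
X = np.eye(3)                      # dummy user features
smoothed = (H / Dv) @ ((H / De).T @ X)
```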

4.2.7 Decentralized.

Centralized data storage is becoming infeasible in practice due to rising privacy concerns. Thus, instead of storing the complete U-U/U-I graphs together, a decentralized storage of the graphs is often required. Here, the edges for social relations and interactions of each user are stored locally at each user’s local server such that only non-sensitive data is shared with the centralized server [52].

5 Taxonomy of Architectures

In this section, we present the taxonomy of architectures for GNN-based SocialRS. Model architectures consist of three key components as shown in Figure 6: (C1) encoders; (C2) decoders; (C3) loss functions. Using the U-U and U-I graphs in (C1), the encoders create low-dimensional vectors (i.e., embeddings) for users and items by employing different GNN encoders. Here, some works exploit additional information of users and/or items (e.g., their attributes and groups; refer to Section 4) to construct more-accurate user and item embeddings. In Figure 6, the dashed lines show which encoders use each additional piece of information. In (C2), the decoders predict each user’s preference on each item via different operations on the user and item embeddings obtained from (C1). Finally, in (C3), different loss functions are optimized to learn the embeddings in an end-to-end manner. We discuss the advantages and disadvantages of each encoder in Table 5 while the loss functions are discussed in Table 6. In the subsequent subsections, we describe each component of GNN-based SocialRS in detail.
Table 5.
Encoder | Complexity | Representativeness | Additional Features | Known Issues
GCN | Low | Low | None | Oversmoothing
LightGCN | Very Low | Lower | None | Linear representation
GANN | High | High | None | Oversmoothing, more parameters
HetGNN | High | Higher | Heterogeneous interactions | Extra annotation, hard to generalize over interactions
GRNN | Very High | Higher | Temporal interactions | Vanishing and exploding gradients
HyperGNN | Higher | Higher | High-order relations | Extra motif annotation
GAE | Very High | High | Generative | Harder to optimize
Hyperbolic GNN | High | Very High | Hierarchical nature | Less empirical evidence
Table 5. High-Level Comparison of Different Encoders
Table 6.
Loss function | Complexity | Benefits | Known Issues
MSE | Low | Learns continuous ratings | Sensitive to outliers
BPR | Low | Learns rankings between items | Cannot handle continuous ratings
CE | Low | Suited for classification | Cannot handle continuous ratings
Hinge | Low | Faster convergence | Cannot handle continuous ratings
Social LP | Low | More suitable social embeddings | Multi-objective trade-off
SSL | High | More informed representations | Expensive pre-processing
Group | Low | Group-level representations (common interests) | Groups form for specific items
Adv | High | More robust representations | Adversarial instability and cost
Path | High | Predicts social influence propagation | Complicated auxiliary task
KD | High | Less overfitting | Training multiple models
Sentiment | High | User sentiment-weighed ratings | Complex sentiment classification task
Policy Net. | High | Importance weights to each component | Expensive weight allocation
Table 6. High-Level Comparison of Different Loss Functions
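To make the first two rows of Table 6 concrete, here is a minimal sketch of MSE and BPR on toy score vectors (plain NumPy, not attached to any surveyed model):

```python
import numpy as np

def mse_loss(r_hat, r):
    """Mean squared error: fits continuous rating values directly."""
    return np.mean((r_hat - r) ** 2)

def bpr_loss(pos_scores, neg_scores):
    """Bayesian personalized ranking: pushes each observed (positive) item
    to score above a sampled unobserved (negative) item."""
    diff = pos_scores - neg_scores
    return -np.mean(np.log(1.0 / (1.0 + np.exp(-diff)) + 1e-10))

# a model that ranks positives above negatives incurs a lower BPR loss
good = bpr_loss(np.array([2.0, 1.5]), np.array([0.1, 0.2]))
bad = bpr_loss(np.array([0.1, 0.2]), np.array([2.0, 1.5]))
```

This also illustrates the trade-off noted in the table: BPR only constrains pairwise orderings, so it cannot recover continuous rating values the way MSE does.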
Fig. 6.
Fig. 6. Overview of architectures for GNN-based SocialRS methods. The solid lines represent the common procedure of the GNN-based SocialRS methods, while the dashed lines represent the flows for the auxiliary inputs or the losses.

5.1 Encoders

We group the encoders of GNN-based SocialRS into eight categories: GCN, LightGCN, GANN, HetGNN, GRNN, HyperGNN, GAE, and hyperbolic GNN. Table 7 shows the taxonomy of encoders used in existing work in detail. Figures 7, 8, and 9 present the conceptual views for distinct types of GNN encoders.
Table 7.
User Social | User Interest | Item Encoder | Models
GANN | GANN | GANN | GraphRec [16], DiffNet++ [104], DiffNetLG [80], MutualRec [110], SR-HGNN [114], ASR [32], GNNTSR [59], GAT-NSR [65], SoRecGAT [91], PA-GAN [27], SOAP-VAE [92], GTN [26], PDARec [131], FeSoG [52], ESRF [121], SoHRML [51], FBNE [5], DISGCN [38], ME-LGN [63], SGA [53], GDSRec [6], DANSER [106], GraphRec+ [17], SRAN [112], TAG [70], SDCRec [15], KConvGraph [89], HeteroGraphRec [73], SocialRippleNet [31], SCGRec [117], TGRec [1], GSFR [109], IGRec [10], DICER [18]
GANN | GANN | Emb | SAN [30], HIDM [40], GLOW [36], GMAN [42], HSGNN [100], MrAPR [81]
GANN | GANN | GCN | BFHAN [128], SHGCN [134]
GANN | GRNN | GRNN | DGARec-R [84], SSRGNN [9], GNN-DSR [44]
GANN | GRNN | Emb | DREAM [82]
GANN | RNN | GANN | FuseRec [66]
GANN | RNN | RNN | SGHAN [101]
GANN | RNN | Emb | SSDRec [115]
GANN | - | - | GHSCF [2]
GCN | GCN | Emb | DiffNet [105], MEGCN [33], MPSR [46], HOSR [50]
GCN | GCN | GCN | ATGCN [74], SENGR [76], SAGLG [48], GNN-SOR [23], GDMSR [71], MADM [58], DSL [97]
GCN | GRNN | GRNN | GNNRec [45], MGSR [67]
GCN | GRNN | GCN | EGFRec [22]
GCN | GRNN | Emb | DGRec [83]
GCN | RNN | GANN | MOHCN [99]
GCN | MLP | Emb | MGNN [111]
LightGCN | LightGCN | LightGCN | SocialLGN [43], DcRec [103], APTE [130], EAGCN [102], CGL [127], SEPT [120], DSR [75], DESIGN [88]
LightGCN | LightGCN | Emb | IDiffNet [41]
HetGNN | HetGNN | HetGNN | SeRec [8], DGNN [108]
HetGNN | HetGNN | GANN | DCAN [94]
HyperGNN | GANN | HyperGNN | MHCN [122], Motif-Res [85], DH-HGCN [24]
GAE | GAE | GAE | SIGA [47]
Hyperbolic | Hyperbolic | Hyperbolic | HyperSoRec [93]
Table 7. Taxonomy of Encoder Architectures
It should be noted that some methods employ non-GNN encoders (e.g., RNN, MLP, or just an embedding vector (Emb)) or no encoders (i.e., \(-\) ) to obtain embeddings.
Fig. 7. Conceptual views of the GCN, LightGCN, GANN, and HetGNN encoders with a single GNN layer. The left side represents user embeddings, while the right side represents item embeddings.
Fig. 8. Conceptual view of the GRNN encoder with a single GNN layer. Note that existing work did not use GRNN encoders to obtain user social embeddings.
Fig. 9. Conceptual view of the HyperGNN encoder with a single GNN layer. Note that existing work did not distinguish between user embeddings as interaction and social embeddings, nor did it utilize HyperGNN encoders to generate item embeddings.
Generally, in (C1) encoders, most methods represent each user \(p_i\) into two types of low-dimensional vectors (i.e., embeddings) by employing a GNN encoder: \(p_i\) ’s interaction embedding \(\mathbf {u}_i^I\) based on a U-I graph and \(p_i\) ’s social embedding \(\mathbf {u}_i^S\) based on a U-U graph. Then, they aggregate them into one embedding \(\mathbf {u}_i\) for the corresponding user \(p_i\) . In the meantime, they also obtain each item \(q_j\) ’s embedding \(\mathbf {v}_j\) via another GNN encoder using a U-I graph. As mentioned above, some works enhance these embeddings by incorporating additional input representations, such as user/item attributes and hypergraphs.
It should be noted that some works employ only a single GNN encoder to obtain the two embeddings, whereas others use different GNN encoders for the embeddings of different node types (i.e., users or items). For simplicity, however, we explain the GNN encoders here by generalizing them to any node type in the input graph.

5.1.1 GCN.

Early works [23, 33, 46, 48, 50, 74, 76, 105, 111, 133] have focused on representing the user and item embeddings using GCN. Given a node \(n_i\) (i.e., a user or an item) in the input graph (i.e., the U-I or U-U graph), \(n_i\) ’s embedding \(\mathbf {e}_{i}^{(k)}\) in the kth layer is represented based on the embeddings of \(n_i\) ’s neighbors in the \((k-1)\) th layer as follows:
\begin{equation} \mathbf {e}_{i}^{(k)} = \sigma \left(\sum \limits _{n_j \in \mathcal {N}_{n_i}} \frac{1}{\sqrt {|\mathcal {N}_{n_j}||\mathcal {N}_{n_i}|}}\mathbf {e}_{j}^{(k-1)} \mathbf {W}^{(k)}\right), \end{equation}
(1)
where \(\sigma\) and \(\mathbf {W}^{(k)} \in \mathbb {R}^{d \times d}\) denote a non-linear activation function (e.g., ReLU) and a trainable transformation matrix, respectively. Also, \(\mathcal {N}_{n_i}\) indicates a set of \(n_i\) ’s neighbors in the input graph. Here, some works take the self-connection of \(n_i\) into consideration by aggregating over the set \(\mathcal {N}_{n_i} \cup \lbrace n_i\rbrace\) . Most methods simply consider the \(n_i\) ’s embedding in the last Kth layer, \(\mathbf {e}_{i}^{(K)}\) , as its final embedding \(\mathbf {z}_{i}\) . Another variant is to aggregate \(n_i\) ’s embeddings from all layers, that is, \(\mathbf {z}_{i}=\sum ^K_{k=1} \mathbf {e}_{i}^{(k)}\) . For instance, DiffNet [105] obtains a user \(p_i\) ’s social embedding \(\mathbf {u}_{i}^S\) (resp. interaction embedding \(\mathbf {u}_{i}^I\) ) by performing GCN with k-layers (resp. 1-layer) based on the U-U graph (resp. U-I graph). For each item \(q_j\) , it simply obtains \(q_j\) ’s embedding \(\mathbf {v}_{j}\) based on its attributes without using a GNN encoder.
Note that different normalization schemes have been proposed in the literature to normalize the weight of each neighbor. The most common strategy is the symmetric normalization as \(1/\sqrt {|\mathcal {N}_{n_j}||\mathcal {N}_{n_i}|}\) for its simpler symmetric matrix form. One can also just use \(1/|\mathcal {N}_{n_j}|\) but it gives a lower weight to high-degree nodes as compared to the previous form, which may not be desirable. Finally, just using \(1/|\mathcal {N}_{n_i}|\) is also not typically desirable as it smooths the neighbor information without considering their degrees. We will thus use symmetric normalization unless otherwise mentioned.
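To make this concrete, the propagation rule of Equation (1) with symmetric normalization can be sketched with dense NumPy matrices (an illustrative toy version written for this survey, not code from any reviewed method):

```python
import numpy as np

def gcn_layer(E, A, W, include_self=False):
    """One GCN layer (Equation (1)): symmetric degree normalization,
    linear transform, ReLU. E: (N, d) node embeddings from the previous
    layer, A: (N, N) binary adjacency of the input (U-I or U-U) graph,
    W: (d, d) trainable transformation matrix."""
    if include_self:                         # optional self-connection
        A = A + np.eye(A.shape[0])
    deg = A.sum(axis=1)                      # |N(n_i)| for every node
    d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    # Entry (i, j) becomes 1 / sqrt(|N(n_i)||N(n_j)|) for each edge.
    A_norm = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ E @ W, 0.0)   # ReLU non-linearity
```

Stacking K such layers and either taking the last output or summing across layers then yields the final embedding \(\mathbf {z}_{i}\) described above.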

5.1.2 LightGCN.

It is well-known that non-linear activation and feature transformation in GCN encoders make the propagation step very complicated for training and scalability [25, 60]. Motivated by this, some works [41, 43, 75, 88, 102, 103, 120, 127, 130] have attempted to replace their GCN encoders with LightGCN [25], that is,
\begin{equation} \mathbf {e}_{i}^{(k)} = \sum \limits _{n_j \in \mathcal {N}_{n_i}} \frac{1}{\sqrt {|\mathcal {N}_{n_j}||\mathcal {N}_{n_i}|}}\mathbf {e}_{j}^{(k-1)}. \end{equation}
(2)
It should be noted that LightGCN [25] has no non-linear activation function, no feature transformation, and no self-connection.
For instance, DcRec [103] obtains each user \(p_i\) ’s social embedding \(\mathbf {u}_{i}^S\) via GCN, while obtaining \(p_i\) ’s interaction embedding \(\mathbf {u}_{i}^I\) and each item \(q_j\) ’s embedding \(\mathbf {v}_{j}\) via the LightGCN encoder.
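A minimal sketch of the simplified propagation of Equation (2) (illustrative only; the `lightgcn` readout here averages the layer outputs, whereas some papers take the last layer or a sum):

```python
import numpy as np

def lightgcn_layer(E, A):
    """One LightGCN layer (Equation (2)): pure neighborhood smoothing
    with symmetric normalization -- no activation, no feature
    transformation, no self-connection."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    A_norm = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    return A_norm @ E

def lightgcn(E0, A, num_layers=3):
    """Stack K layers and average them as a readout (a common choice)."""
    layers = [E0]
    for _ in range(num_layers):
        layers.append(lightgcn_layer(layers[-1], A))
    return np.mean(layers, axis=0)
```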

5.1.3 GANN.

The attention mechanism in graphs originated from the graph attention network (GAT) [90] and has already been successful in many applications, including recommender systems. Considering different weights for neighbor nodes in the input graph helps focus on important adjacent nodes while filtering out noise during the propagation process [90]. Therefore, almost all existing works on SocialRS have leveraged the attention mechanism in their GNN encoders [1, 2, 5, 6, 9, 10, 15, 16, 17, 18, 26, 27, 30, 31, 32, 36, 38, 40, 42, 44, 51, 52, 53, 59, 63, 65, 66, 70, 73, 80, 81, 82, 84, 89, 91, 92, 100, 101, 104, 106, 109, 110, 112, 114, 115, 117, 121, 128, 131, 134].
The common intuitions behind their design of the attention mechanism are: (1) each user’s preferences for different items may differ, and (2) each user’s influences on her social friends may differ. Based on such intuitions, many methods represent a node \(n_i\) ’s embedding in the kth layer by attentively aggregating the embeddings of \(n_i\) ’s neighbors in the \((k-1)\) th layer as follows:
\begin{equation} \mathbf {e}_{i}^{(k)} = \sigma \left(\sum \limits _{n_j \in \mathcal {N}_{n_i}} (\alpha _{ij} \cdot \mathbf {e}_{j}^{(k-1)}) \mathbf {W}^{(k)}\right), \end{equation}
(3)
where \(\alpha _{ij}\) indicates the attention weight of neighbor node \(n_j\) w.r.t \(n_i\) .
Now, we discuss how to compute the attention weights in existing works. Most methods, including DANSER [106] and SCGRec [117], typically use the concatenation-based graph attention as follows:
\begin{equation} \alpha _{ij} = \frac{\exp (\text{MLP}[\mathbf {e}_{i},\mathbf {e}_{j}])}{\sum \nolimits _{n_k \in \mathcal {N}_{n_i}} \exp (\text{MLP}[\mathbf {e}_{i},\mathbf {e}_{k}])}. \end{equation}
(4)
Also, other methods, including DICER [18] and MEGCN [33], use the similarity-based graph attention, which is another popular technique, that is,
\begin{equation} \alpha _{ij} = \frac{\exp (\text{sim}(\mathbf {e}_{i}, \mathbf {e}_{j}))}{\sum \nolimits _{n_k \in \mathcal {N}_{n_i}} \exp (\text{sim}(\mathbf {e}_{i}, \mathbf {e}_{k}))}, \end{equation}
(5)
where sim() denotes a similarity function such as cosine similarity and dot product.
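The attentive aggregation of Equations (3) and (5) can be sketched as follows (a toy version; the dot product stands in for sim(), and the MLP-based variant of Equation (4) is analogous with a learned scoring network):

```python
import numpy as np

def softmax(x):
    """Numerically-stable softmax over a score vector."""
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def similarity_attention(e_i, neighbors):
    """Similarity-based attention weights (Equation (5)) using the dot
    product as sim(); returns alpha_ij for every neighbor of n_i."""
    scores = np.array([e_i @ e_j for e_j in neighbors])
    return softmax(scores)

def gann_aggregate(e_i, neighbors, W):
    """Attentive aggregation of Equation (3) with ReLU non-linearity."""
    alpha = similarity_attention(e_i, neighbors)
    agg = sum(a * e_j for a, e_j in zip(alpha, neighbors))
    return np.maximum(agg @ W, 0.0)
```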

5.1.4 HetGNN.

The user-item interactions and user-user social relations can be regarded as the users’ heterogeneous relationships, that is, a user’s preferences on items and his/her friendship. In this sense, a few methods [8, 94] have attempted to model the inputs as a heterogeneous graph and then design the HetGNN encoders for learning user and item embeddings, that is,
\begin{equation} \mathbf {e}_{i}^{(k)} = \sigma \left(\sum \limits _{n_j \in \mathcal {N}_{n_i}} \frac{1}{\sqrt {|\mathcal {N}_{n_j}||\mathcal {N}_{n_i}|}} \mathbf {e}_{j}^{(k-1)} \mathbf {W}_{v_{ij}}^{(k)}\right), \end{equation}
(6)
where \(v_{ij}\) indicates the type of relation between \(n_i\) and \(n_j\) . As a result, the HetGNN encoder employs different transformation matrices according to the relations between two nodes.
For instance, SeRec [8] defines four types of directed edges (i.e., user-user edges, user-item edges, item-user edges, and item-item edges), constructing a heterogeneous graph based on the above edges. Then, it obtains each user \(p_i\) ’s embedding \(\mathbf {u}_i\) and each item \(q_j\) ’s embedding \(\mathbf {v}_j\) via the HetGNN encoder.
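A relation-typed layer in the spirit of Equation (6) can be sketched as follows (illustrative only; the edge-list representation and relation-type strings are our own, not SeRec's actual implementation):

```python
import numpy as np

def hetgnn_layer(E, edges, W_by_type):
    """One HetGNN layer (Equation (6)). `edges` is a list of directed
    (receiver i, sender j, relation type) triples -- include both
    directions, as in SeRec's four directed edge types -- and
    W_by_type maps each relation type to its own (d, d) matrix."""
    n, d = E.shape
    deg = np.zeros(n)
    for i, _, _ in edges:
        deg[i] += 1                      # |N(n_i)| per receiving node
    out = np.zeros((n, d))
    for i, j, rel in edges:
        norm = 1.0 / np.sqrt(deg[i] * deg[j])
        # Relation-specific transformation W_{v_ij} of the sender.
        out[i] += norm * (E[j] @ W_by_type[rel])
    return np.maximum(out, 0.0)          # ReLU non-linearity
```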

5.1.5 GRNN.

The sequential behaviors of users when they interact with items reflect the evolution of their preferences for items over time. For this reason, time-aware recommender systems have attracted increasing attention in recent years [95]. Such temporal interactions are often divided into multiple user sessions and modeled by session-based SocialRS. Multiple works [9, 22, 44, 45, 67, 83, 84, 99, 111] have attempted to model dynamic user interests through session-based or temporal SocialRS. These models leverage GRNN encoders to capture such time-evolving interests.
Suppose each user p interacts with items in a given sequence \(\mathcal {S}_p\) . Consequently, one can create a sequence of interactions for each item q as \(\mathcal {S}_q\) , consisting of users that rate item q in a temporal sequence. In general, the temporal sequence is denoted for node \(n_i\) as \(\mathcal {S}_{n_i} = \lbrace n^i_1, n^i_2, \cdots , n^i_K\rbrace\) . Note that session-based encoders would divide \(\mathcal {S}_{n_i}\) into multiple sessions \(\mathcal {S}^t_{n_i}\) and encode each session separately. The GRNN encoder for node \(n_i\) can be then generalized as:
\begin{equation} \mathbf {e}_i = {\rm\small GRNN}(\mathcal {S}_{n_i}, \mathcal {N}_{n_i}), \end{equation}
(7)
where GRNN is a combination of RNN and GNN modules. In particular, one can obtain dynamic user interests and item embeddings through a long short-term memory (LSTM) [69, 124] unit, that is,
\begin{equation} \begin{aligned}\mathbf {x}_{i}^{(k)} &= \sigma (\mathbf {W}_x[\mathbf {h}_i^{(k-1)}, \mathbf {n}_k^i] + b_x), \\ \mathbf {f}_{i}^{(k)} &= \sigma (\mathbf {W}_s[\mathbf {h}_i^{(k-1)}, \mathbf {n}_k^i] + b_s), \\ \mathbf {o}_{i}^{(k)} &= \sigma (\mathbf {W}_o[\mathbf {h}_i^{(k-1)}, \mathbf {n}_k^i] + b_o), \\ \mathbf {\tilde{c}}_{i}^{(k)} &= \tanh (\mathbf {W}_c[\mathbf {h}_i^{(k-1)}, \mathbf {n}_k^i] + b_c), \\ \mathbf {c}_{i}^{(k)} &= \mathbf {f}_{i}^{(k)} \odot \mathbf {c}_{i}^{(k-1)} + \mathbf {x}_{i}^{(k)} \odot \mathbf {\tilde{c}}_{i}^{(k)},\\ \mathbf {h}_{i}^{(k)} &= \mathbf {o}_{i}^{(k)} \odot \tanh (\mathbf {c}_{i}^{(k)}). \end{aligned} \end{equation}
(8)
Then, the node embedding \(\mathbf {e}_i\) is obtained using a GNN module such as GANN and GCN (as discussed below). In general, one can obtain
\begin{equation} \mathbf {e}_i = {\rm\small GNN}(\mathbf {h}_i^{(K)}, \lbrace \mathbf {h}_j^{(K)}:n_j\in \mathcal {N}_{n_i}\cup \lbrace n_i\rbrace \rbrace). \end{equation}
(9)
For instance, DREAM [82] obtains each user \(p_i\) ’s embedding within each session using the GRNN encoder as above. It uses a Relational GAT module for the GNN layer to aggregate information from the user’s social neighbors. Meanwhile, the embedding \(\mathbf {v}_j\) of each item \(q_j\) is obtained using a simple embedding layer.
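A compact sketch combining the LSTM recurrence of Equation (8) with the GNN step of Equation (9) (illustrative only; mean pooling stands in for the GANN/GCN modules used by actual methods, and the parameter packing is our own):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_encode(seq, params):
    """Run the LSTM recurrence of Equation (8) over a sequence of
    item/user embeddings; returns the final hidden state h^(K).
    `params` packs the four (d, 2d) gate matrices and (d,) biases,
    each acting on the concatenation [h, n_k]."""
    d = seq[0].shape[0]
    h, c = np.zeros(d), np.zeros(d)
    Wx, Wf, Wo, Wc, bx, bf, bo, bc = params
    for n_k in seq:
        z = np.concatenate([h, n_k])
        x = sigmoid(Wx @ z + bx)          # input gate
        f = sigmoid(Wf @ z + bf)          # forget gate
        o = sigmoid(Wo @ z + bo)          # output gate
        c_tilde = np.tanh(Wc @ z + bc)    # candidate cell state
        c = f * c + x * c_tilde
        h = o * np.tanh(c)
    return h

def grnn_embed(sequences, neighbors, params):
    """Equation (9) with mean pooling as the GNN module: each node's
    embedding averages the LSTM states of itself and its neighbors."""
    h = {i: lstm_encode(seq, params) for i, seq in sequences.items()}
    return {i: np.mean([h[j] for j in [i] + neighbors[i]], axis=0) for i in h}
```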

5.1.6 HyperGNN.

Most GNN encoders, as mentioned above, learn pairwise connectivity between two nodes. However, more-complicated connections can be captured by jointly using user-item relations with user-user edges and/or using higher-order social relations. For instance, triangular structures, including two users and their co-rated items, are a common motif. To leverage such high-order relations, some works [24, 85, 122] have attempted to model the inputs as a hypergraph and then design the HyperGNN encoders for learning user and item embeddings.
Let \(\mathcal {G}=(\mathcal {N},\mathcal {E})\) denote a hypergraph where \(\mathcal {N}\) and \(\mathcal {E}\) indicate sets of nodes and hyperedges, respectively. Each hyperedge \(e \in \mathcal {E}\) is a subset of nodes, that is, \(e \in 2^{\mathcal {N}}\) . The node degree is thus \(d_i = \sum _{e \in \mathcal {E}: n_i \in e} {1}\) for \(\forall n_i \in \mathcal {N}\) . Then, each layer of the HyperGNN encoder learns node embeddings using the hyperedge relations as follows:
\begin{equation} \mathbf {h}_i^{(k)} = \sigma \left(\sum \limits _{e \in \mathcal {E}: n_i \in e} \sum \nolimits _{n_j \in e} \frac{w_{e,i,j}}{\sqrt {d_i d_j} |e|} \mathbf {h}_{j}^{(k-1)}\right), \end{equation}
(10)
where \(w_{e,i,j}\) are learnable parameters and \(d_i = \sum _{e \in \mathcal {E}: n_i \in e} {1}\) . We note that HyperGNN-based SocialRS methods [24, 122] remove non-linear activation and feature transformation as in the LightGCN encoder.
For instance, MHCN [122] designs three types of triangular motifs, constructing three incidence matrices, each representing a hypergraph induced by each motif. Then, it obtains each user \(p_i\) ’s embedding \(\mathbf {u}_i\) via the multi-type HyperGNN encoders while obtaining each item \(q_j\) ’s embedding \(\mathbf {v}_j\) via the GCN encoder.
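Equation (10), in its LightGCN-style simplified form (no non-linear activation, unit hyperedge weights), can be sketched over an explicit hyperedge list (a toy version written for this survey; real implementations operate on sparse incidence matrices):

```python
import numpy as np

def hypergnn_layer(H, hyperedges, weights=None):
    """One simplified HyperGNN layer (Equation (10)). H: (N, d) node
    embeddings; `hyperedges` is a list of node-index lists (e.g.,
    triangular motifs); node degree d_i counts incident hyperedges."""
    n = H.shape[0]
    deg = np.zeros(n)
    for e in hyperedges:
        for i in e:
            deg[i] += 1
    out = np.zeros_like(H)
    for idx, e in enumerate(hyperedges):
        w = 1.0 if weights is None else weights[idx]
        for i in e:
            for j in e:
                # w / (sqrt(d_i d_j) |e|) weighting of Equation (10).
                out[i] += w / (np.sqrt(deg[i] * deg[j]) * len(e)) * H[j]
    return out
```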

5.1.7 Others.

Furthermore, we briefly describe the two encoders, GAE and hyperbolic GNN, each of which is employed by only one method. Liu et al. [47] pointed out that GCN is mainly suitable for semi-supervised learning tasks. On the other hand, they claimed that the goal of GAE coincides with that of the recommendation task, which is to minimize the reconstruction error of input and output [47]. For this reason, they proposed a SocialRS method, named SIGA, which employs GAE and is used for the rating prediction task.
Meanwhile, Wang et al. [93] pointed out that since existing methods usually learn the user and item embeddings in the Euclidean space, these methods fail to explore the latent hierarchical property in the data. For this reason, they proposed a SocialRS method, named HyperSoRec, which performs in the hyperbolic space because the exponential expansion of hyperbolic space helps preserve more-complex relationships between users and items [34].

5.2 Decoders

In this subsection, we group the decoders of GNN-based SocialRS into two categories: dot-product and multi-layer perceptron (MLP). Table 8 summarizes the taxonomy of these decoders.
Decoders | Models
Dot-product | DiffNet [105], DiffNet++ [104], DiffNetLG [80], MEGCN [33], ASR [32], ATGCN [74], GNN-SOR [23], DGARec [84], SAGLG [48], HIDM [40], SocialLGN [43], GMAN [42], MPSR [46], DREAM [82], SHGCN [134], SCGRec [117], PDARec [131], MHCN [122], SEPT [120], DcRec [103], DH-HGCN [24], SSDRec [115], SeRec [8], Motif-Res [85], APTE [130], EAGCN [102], HOSR [50], FeSoG [52], IGRec [10], ESRF [121], SDCRec [15], SoHRML [51], HyperSoRec [93], DSR [75], DESIGN [88], SRAN [112], IDiffNet [41], CGL [127], FBNE [5], MrAPR [81], GNNRec [45], SGHAN [101], SSRGNN [9], DISGCN [38], ME-LGN [63], SGA [53], SIGA [47], GDMSR [71], DGNN [108], DSL [97]
MLP | GraphRec [16], GraphRec+ [17], DICER [18], SAN [30], KConvGraphRec [89], EGFRec [22], FuseRec [66], GNNTSR [59], GAT-NSR [65], TGRec [1], SoRecGAT [91], PA-GAN [27], GHSCF [2], HeteroGraphRec [73], GLOW [36], BFHAN [128], MGSR [67], GTN [26], MGNN [111], SR-HGNN [114], DANSER [106], MutualRec [110], GSFR [109], SOAP-VAE [92], SENGR [76], MOHCN [99], DCAN [94], TAG [70], HSGNN [100], Social-RippleNet [31], GNN-DSR [44], GDSRec [6], DGRec [83], DREAM [82]
Table 8. Taxonomy of Decoder Architectures

5.2.1 Dot-Product.

Many methods [5, 8, 9, 10, 15, 23, 24, 32, 33, 38, 40, 41, 42, 43, 45, 46, 47, 48, 50, 51, 52, 53, 63, 74, 75, 80, 81, 82, 84, 85, 88, 93, 101, 102, 103, 104, 105, 112, 115, 117, 120, 121, 122, 127, 130, 131, 133, 134] simply predict a user \(p_i\) ’s preference \(\hat{r}_{ij}\) on an item \(q_j\) via a dot product of their corresponding embeddings, that is,
\begin{equation} \hat{r}_{ij} = \mathbf {u}_i \cdot \mathbf {v}_j^\top . \end{equation}
(11)

5.2.2 MLP.

Many other methods [1, 2, 6, 16, 17, 18, 22, 26, 27, 30, 31, 36, 44, 59, 65, 66, 67, 70, 73, 76, 82, 83, 89, 91, 92, 94, 99, 100, 106, 109, 110, 111, 114, 128] predict a user \(p_i\) ’s preference \(\hat{r}_{ij}\) on an item \(q_j\) by employing MLP as follows:
\begin{equation} \hat{r}_{ij} = \sigma _{L}\left(\mathbf {W}^\top _{L}\left(\sigma _{L-1}\left(\ldots \sigma _{2}\left(\mathbf {W}_{2}^\top \begin{bmatrix} \mathbf {u}_{i} \\ \mathbf {v}_{j} \end{bmatrix} +\mathbf {b}_{2}\right) \ldots \right)\right)+\mathbf {b}_{L}\right), \end{equation}
(12)
where \(\mathbf {W}_{i}\) , \(\mathbf {b}_{i}\) , and \(\sigma _{i}\) denote the weight matrix, bias vector, and activation function for ith layer’s perceptron, respectively.
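Both decoders of Equations (11) and (12) can be sketched as follows (toy versions; the sigmoid hidden activations and the linear output layer of `mlp_decoder` are simplifying assumptions, since the surveyed papers choose \(\sigma _{i}\) differently):

```python
import numpy as np

def dot_product_decoder(u, v):
    """Equation (11): preference score as the inner product of the
    user and item embeddings."""
    return float(u @ v)

def mlp_decoder(u, v, weights, biases):
    """Equation (12): an L-layer MLP over the concatenation [u; v].
    Hidden layers use sigmoid here; the last layer is kept linear."""
    h = np.concatenate([u, v])
    for W, b in zip(weights[:-1], biases[:-1]):
        h = 1.0 / (1.0 + np.exp(-(W @ h + b)))
    return float(weights[-1] @ h + biases[-1])
```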

5.3 Loss Functions

In this subsection, we first group the primary loss functions of GNN-based SocialRS into four categories: BPR [72], MSE, CE, and hinge loss. In addition, we found that some works additionally employ auxiliary loss functions. Thus, we further group these loss functions into eight categories: social link prediction (LP) loss, SSL, group-based loss, adversarial (Adv) loss, path-based loss, knowledge distillation (KD) loss, sentiment-aware loss, and policy-network-based (Policy Net) loss. Table 9 summarizes the taxonomy of loss functions used in existing work.
Objective | Loss Functions | Models
Primary | MSE | GraphRec [16], GNNTSR [59], GAT-NSR [65], TGRec [1], PA-GAN [27], GHSCF [2], GraphRec+ [17], GTN [26], PDARec [131], GNN-SOR [23], DANSER [106], SAN [30], KConvGraphRec [89], HeteroGraphRec [73], GMAN [42], SR-HGNN [114], DGARec-R [84], MGSR [67], APTE [130], EAGCN [102], FeSoG [52], SENGR [76], MOHCN [99], TAG [70], Social-RippleNet [31], GNN-DSR [44], GDSRec [6]
Primary | BPR | ASR [32], SAGLG [48], MGNN [111], HIDM [40], SocialLGN [43], MPSR [46], SHGCN [134], MutualRec [110], ATGCN [74], DiffNet [105], DiffNet++ [104], DiffNetLG [80], MEGCN [33], GLOW [36], SCGRec [117], SEPT [120], DcRec [103], MHCN [122], DH-HGCN [24], GSFR [109], HOSR [50], IGRec [10], ESRF [121], SoHRML [51], DSR [75], SRAN [112], IDiffNet [41], CGL [127], MrAPR [81], DISGCN [38], HSGNN [100], ME-LGN [63], SGA [53], Motif-Res [85], GDMSR [71], MADM [58], DGNN [108], DSL [97]
Primary | CE | DICER [18], SoRecGAT [91], DANSER [106], BFHAN [128], EGFRec [22], FuseRec [66], SeRec [8], SSDRec [115], DESIGN [88], FBNE [5], GNNRec [45], DCAN [94], SOAP-VAE [92], SGHAN [101], SSRGNN [9], GDSRec [6], SIGA [47], DGRec [83], DREAM [82]
Primary | Hinge | HyperSoRec [93]
Auxiliary | Social LP | MGNN [111], MutualRec [110], SR-HGNN [114], APTE [130], SoHRML [51], FBNE [5], GDMSR [71]
Auxiliary | SSL (Social) | SEPT [120], DcRec [103], MADM [58], DSL [97]
Auxiliary | SSL (Interaction) | SDCRec [15], CGL [127], DISGCN [38], DCAN [94], DcRec [103]
Auxiliary | SSL (Motif) | Motif-Res [85], MHCN [122]
Auxiliary | Group | GLOW [36], GMAN [42]
Auxiliary | Adv | ESRF [121]
Auxiliary | Path | SPEX [37]
Auxiliary | KD | DESIGN [88]
Auxiliary | Sentiment | SENGR [76]
Auxiliary | Policy Net. | DANSER [106]
Table 9. Taxonomy of Loss Functions

5.3.1 Primary Loss Functions.

Different primary loss functions are employed depending on whether the methods focus on explicit or implicit feedback.
MSE Loss. For the methods that focus on explicit feedback (e.g., star ratings) of users, most of them [1, 2, 6, 16, 17, 23, 26, 27, 30, 31, 42, 44, 52, 59, 65, 67, 70, 73, 76, 84, 89, 99, 102, 106, 114, 130, 131] learn user and item embeddings via the MSE-based loss function \(\mathcal {L}_{MSE}\) , which is defined as follows:
\begin{equation} \mathcal {L}_{MSE}=\sum _{p_i\in \mathcal {U}}\sum _{q_j\in \mathcal {I}}(\hat{r}_{ij}-{r}_{ij})^2, \end{equation}
(13)
where \({r}_{ij}\) indicates \(p_i\) ’s real rating score on \(q_j\) . That is, the embeddings of \(p_i\) and \(q_j\) are learned, aiming at minimizing the differences between \(p_i\) ’s real and predicted scores, that is, \({r}_{ij}\) and \(\hat{r}_{ij}\) , for \(q_j\) .
BPR Loss. For the methods that focus on implicit feedback (e.g., click or browsing history) of users, most of them [10, 24, 32, 33, 36, 38, 40, 41, 43, 46, 48, 50, 51, 53, 63, 74, 75, 80, 81, 85, 100, 103, 104, 105, 109, 110, 111, 112, 117, 120, 121, 122, 127, 134] learn user and item embeddings via the BPR-based loss function \(\mathcal {L}_{BPR}\) , which is defined as follows:
\begin{equation} \mathcal {L}_{BPR}=-\sum _{p_i\in \mathcal {U}}\sum _{q_j\in \mathcal {N}_{p_i}}\sum _{q_k\in \mathcal {I}{\backslash }\mathcal {N}_{p_i}}\text{log}\sigma (\hat{r}_{ij}-\hat{r}_{ik}), \end{equation}
(14)
where \(\mathcal {U}\) and \(\mathcal {N}_{p_i}\) denote the set of users and the set of items rated by \(p_i\) , respectively. \(\hat{r}_{ij}\) and \(\hat{r}_{ik}\) indicate \(p_i\) ’s preference on the rated item \(q_j\) and on a (randomly-sampled) unrated item \(q_k\) , respectively. Also, \(\sigma\) indicates the sigmoid function. That is, the embeddings of \(p_i\) , \(q_j\) , and \(q_k\) are learned based on the intuition that \(p_i\) ’s preference \(\hat{r}_{ij}\) on \(q_j\) is likely to be higher than \(p_i\) ’s preference \(\hat{r}_{ik}\) on \(q_k\) .
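A minimal sketch of this pairwise objective of Equation (14) (illustrative only; real implementations sample a negative item \(q_k\) per observed pair in each mini-batch):

```python
import numpy as np

def bpr_loss(U, V, interactions, neg_samples):
    """Equation (14): for each observed pair (i, j) and a sampled
    unrated item k, maximize sigma(r_ij - r_ik). U: (m, d) user and
    V: (n, d) item embeddings; `neg_samples` maps (i, j) -> k."""
    loss = 0.0
    for (i, j) in interactions:
        k = neg_samples[(i, j)]
        r_ij = U[i] @ V[j]                 # score of the rated item
        r_ik = U[i] @ V[k]                 # score of the unrated item
        loss -= np.log(1.0 / (1.0 + np.exp(-(r_ij - r_ik))))
    return loss
```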
CE Loss. Several methods [5, 6, 8, 9, 18, 22, 37, 45, 47, 66, 82, 83, 88, 91, 92, 94, 101, 106, 115, 128, 133] for implicit feedback learn user and item embeddings via the CE-based loss function \(\mathcal {L}_{CE}\) , which is defined as follows:
\begin{equation} \mathcal {L}_{CE}=-\sum _{p_i\in \mathcal {U}}\sum _{q_j\in \mathcal {I}}{r}_{ij}\text{log}(\hat{r}_{ij}) + (1-{r}_{ij})\text{log}(1-\hat{r}_{ij}), \end{equation}
(15)
where \(\mathcal {I}\) indicates a set of items. It should be noted that \({r}_{ij}=1\) if \(q_j \in \mathcal {N}_{p_i}\) , otherwise \({r}_{ij}=0\) . That is, the embeddings of \(p_i\) and \(q_j\) are learned, aiming at maximizing \(p_i\) ’s preferences on his/her rated items while minimizing \(p_i\) ’s preferences on his/her unrated items.
Hinge Loss. A method [93] for implicit feedback learns user and item embeddings via the hinge loss function \(\mathcal {L}_{Hinge}\) , which is defined as follows:
\begin{equation} \mathcal {L}_{Hinge}=\sum _{p_i\in \mathcal {U}}\sum _{q_j\in \mathcal {N}_{p_i}}\sum _{q_k\in \mathcal {I}{\backslash }\mathcal {N}_{p_i}}\text{max}(0,\lambda +(\hat{r}_{ij})^2-(\hat{r}_{ik})^2), \end{equation}
(16)
where \(\lambda\) indicates the safety margin size. That is, the embeddings of \(p_i\) , \(q_j\) , and \(q_k\) are learned, aiming at ensuring that \(p_i\) ’s preferences on his/her rated items \(q_j\) are higher than those on his/her unrated items \(q_k\) at least by a margin of \(\lambda\) .
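The CE and hinge objectives of Equations (15) and (16) can be sketched as follows (toy versions; the `eps` term in `ce_loss` is our addition for numerical stability, and `hinge_loss` follows Equation (16) term by term):

```python
import numpy as np

def ce_loss(R, R_hat, eps=1e-12):
    """Equation (15): binary cross-entropy over the rating matrix,
    with r_ij = 1 for rated and r_ij = 0 for unrated items."""
    return float(-np.sum(R * np.log(R_hat + eps)
                         + (1 - R) * np.log(1 - R_hat + eps)))

def hinge_loss(r_pos, r_neg, margin=0.1):
    """Equation (16): hinge loss over (rated, unrated) score pairs
    with safety margin lambda = `margin`."""
    return float(np.sum(np.maximum(0.0, margin + r_pos**2 - r_neg**2)))
```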

5.3.2 Auxiliary Loss Functions.

Here, we discuss the auxiliary loss functions used by GNN-based SocialRS methods.
Social Link Prediction (LP) Loss. Note that the primary objectives of existing works focus on reconstructing the input U-I rating graph. In addition, methods such as MGNN [111], MutualRec [110], and SR-HGNN [114] optimize a BPR-based social LP loss that aims at reconstructing the input U-U social graph. In this way, the user embeddings are additionally trained to reconstruct the social relations, allowing them to better capture the social network structure that is essential for more-effective social recommendation.
Self-Supervised Loss (SSL). SSL originated in image and text domains to address the deficiency of labeled data [49]. The basic idea of SSL is to assign labels for unlabeled data and exploit them additionally in the training process. It is well-known that the data sparsity problem significantly affects the performance of recommender systems. Therefore, there has recently been a surge of interest in SSL for recommender systems [123].
Some GNN-based SocialRS methods [15, 38, 85, 94, 103, 120, 122, 127] designed SSL objectives derived from the U-U social and/or U-I rating graphs. In this survey, we categorize them as social SSL and interaction-based SSL depending on the graph type employed to design the SSL objective. For the social SSL, SEPT [120] augments different user-related views from the U-U social graph and designs two socially-aware encoders that aim at reconstructing the augmented views. It adopts the regime of tri-training [132], which operates on the augmented views for self-supervised signals. For the interaction-based SSL, SDCRec [15] samples, for each user, the two rated items with the highest similarities to that user, and then additionally utilizes them as self-supervised signals.
On the other hand, Motif-Res [85] and MHCN [122] explore the motif information in graph structure so that such information can be utilized as self-supervised signals. For instance, MHCN [122] constructs multi-type hyperedges, which are instances of a set of triangular relations, and designs SSL by leveraging the hierarchy in the hypergraph structures. It aims at reflecting the user node’s local and global high-order connectivity patterns in different hypergraphs [122].
Group-based Loss. GLOW [36] and GMAN [42] make use of the user groups. Based on the group information, both methods additionally design the group-based loss. They define the group-item interaction as indicating a set of users that have interacted with an item. Then, they represent each group’s embedding by attentively aggregating the users’ embeddings within the corresponding group. Finally, the user and item embeddings are learned via a group-based loss so that each group’s preferences on items rated by users in the corresponding group are likely to be higher than those of their unrated items.
Others. We briefly discuss the other loss functions that are employed by only one method. Yu et al. [121] designed an adversarial mechanism to consider the fact that social relations are very sparse, noisy, and multi-faceted in real-world social networks. On the other hand, Li et al. [37] pointed out that existing SocialRS methods fail to distinguish social influence from social homophily. To address this limitation, they designed an auxiliary loss function that models and captures the rich information conveyed by the formation of social homophily [37]. Furthermore, Tao et al. [88] leveraged the KD technique into the social recommendation to address the overfitting problem of existing methods. Shi et al. [76] incorporated both sentiment information derived from reviews and interaction information captured by the GNN encoder. To this end, they designed an auxiliary loss function that captures different sentimental aspects of items from reviews [76]. Lastly, Wu et al. [106] designed a policy-based loss function based on a contextual multi-armed bandit [4], which dynamically weighs different social effects, that is, social homophily, social influence, item-to-item homophily, and item-to-item influence.

5.4 Model Complexity

Finally, we conduct a time complexity analysis for GNN-based SocialRS methods. In Table 10, we present a summary of the time complexity of the methods, providing values from the corresponding papers. The common notations used for the time complexity are outlined below.
User Social | User Interest | Item Encoder | Model | Time Complexity
GANN | GANN | GANN | DiffNet++ [104] | \(O(m(L_s+L_i)D+nL_uD)\)
GANN | GANN | GANN | SR-HGNN [114] | \(O(|\mathbf {R}|d)\)
GANN | GANN | GANN | DISGCN [38] | \(O(|B|d^2K+(|B|+|\mathbf {S}^{+}|+|\mathbf {R}|)dK+|\mathbf {R}|d)\)
GANN | GANN | GANN | ME-LGN [63] | \(O(mKs)\)
GANN | GANN | GANN | GDSRec [6] | \(O(((m+n)s+\langle Q\rangle)D)\)
GANN | GANN | GANN | GraphRec+ [17] | \(O(m(L_i+L_s)d+n(L_u+m_c)d)\)
GANN | GANN | GANN | ESRF [121] | \(O(|\mathbf {R}|d+|\mathbf {S}|d+kmd)\)
GANN | GANN | GANN | SoHRML [51] | \(O((|\mathbf {R}|+|\mathbf {S}|)(2Kd_1d_2+\sum _{k=1} d_kd_{k-1}))\)
GANN | GANN | Emb | SAN [30] | \(O(mL_sK)\)
GANN | GANN | Emb | HSGNN [100] | \(O(mKs)\)
GANN | GANN | GCN | BFHAN [128] | \(O(K(|\mathbf {R}|d+(m+n)(d^2+d)))\)
GANN | GANN | GCN | SHGCN [134] | \(O((|\mathbf {S}|+|\mathbf {E}|)d)\)
GANN | GRNN | GRNN | GNN-DSR [44] | \(O(m(2L_i+L_s)d^2 + n(2L_u+m_c)d^2)\)
GANN | RNN | RNN | SGHAN [101] | \(O(hMd)+O(hdK)\)
GCN | GCN | Emb | DiffNet [105] | \(O(mKL_s)\)
GCN | GCN | Emb | MEGCN [33] | \(O(mk_1k_2)\)
GCN | GCN | Emb | HOSR [50] | \(O(K|\mathbf {S}|d^2+|\mathbf {R}|d)\)
GCN | MLP | Emb | MGNN [111] | \(O(m^2Kd^2)\)
LightGCN | LightGCN | LightGCN | SocialLGN [43] | \(O(m(L_s+L_i+d)d+nL_ud)\)
LightGCN | LightGCN | LightGCN | EAGCN [102] | \(O((m+n)d^2+(|\mathbf {R}|+|\mathbf {S}|)dK+|\mathbf {R}|d)\)
LightGCN | LightGCN | LightGCN | CGL [127] | \(O(|\mathbf {R}|d(K+1)+|\mathbf {S}|dK+5Bd+B^2d)\)
LightGCN | LightGCN | LightGCN | SEPT [120] | \(O(|\mathbf {R}|d+m\log (K))\)
LightGCN | LightGCN | LightGCN | DSR [75] | \(O((L_S|\mathbf {S}|td + L_R|\mathbf {R}|d)+mfd^2)\)
LightGCN | LightGCN | Emb | IDiffNet [41] | \(O(mKL_i+nKL_u)\)
HyperGNN | GANN | HyperGNN | MHCN [122] | \(O(|\mathbf {A}^{+}|dK)\)
Hyperbolic | Hyperbolic | Hyperbolic | HyperSoRec [93] | \(O(cb\prod _{i=1}^K |N_i|)\)
Table 10. Comparison of Time Complexity for GNN-Based SocialRS Methods
It should be noted that only methods that discuss their complexity in the respective papers are listed.
m and n: the number of users and items, respectively;
\(|\mathbf {R}|\) , \(|\mathbf {S}|\) , and \(|\mathbf {E}|\) : number of edges in the user-item interaction, user-user social, and hypergraphs, respectively;
\(|\mathbf {S}^+|\) : number of all the friend pairs with social influence;
K and d: number of GNN layers and the embedding size, respectively;
\(L_S\) and \(L_R\) : number of layers for social and rating graphs, respectively;
\(L_s\) and \(L_i\) : average number of social and item neighbors per user, respectively;
\(L_u\) and \(m_c\) : average number of user and item neighbors per item, respectively;
s and km: number of the sampled neighbors and alternative neighbors, respectively;
t, h, and B: number of iterations, the LSTM hidden-state size, and the batch size, respectively.
For the remaining model-specific notations, including M, \(k_1\) , \(k_2\) , f, c, and b, please refer to the corresponding papers. From Table 10, we observe that methods using encoders such as RNN and HetGNN, which have high complexity, do not offer detailed insights into their computational demands. On the contrary, most methods using LightGCN discuss their complexity and also substantiate their efficiency, including scalability, through experimental validation.

6 Experimental Setup

In this section, we discuss the experimental setup of GNN-based SocialRS methods. Specifically, we review 17 benchmark datasets and 8 evaluation metrics that are widely used in GNN-based SocialRS methods. Furthermore, we compare the recommendation accuracy of GNN-based SocialRS methods across datasets.

6.1 Benchmark Datasets

We summarize the datasets widely used by existing GNN-based SocialRS methods in Table 11. These datasets come from 8 different application domains: product, location, movie, image, music, bookmark, microblog, and miscellaneous. We present the statistics of each dataset, including the numbers of users, items, ratings, and social relations, and a list of papers using the corresponding dataset. Since several versions exist per dataset, we chose the version that includes the largest amount of rating information.
Domains | Datasets | # Users | # Items | # Ratings | # Social | Papers Used
Product | Epinions | 18,088 | 261,649 | 764,352 | 355,813 | [1, 2, 16, 17, 18, 23, 26, 27, 40, 59, 65, 92, 110, 111, 131], [15, 51, 52, 66, 73, 84, 89, 104, 106, 109, 114, 128, 130], [6, 37, 44, 45, 75, 82, 88, 93, 99]
Product | Ciao | 7,317 | 104,975 | 283,319 | 111,781 | [1, 2, 17, 27, 40, 43, 46, 59, 66, 73, 84, 89, 92, 114, 128], [6, 15, 16, 18, 26, 32, 47, 51, 52, 75, 88, 93, 102, 103, 109], [44, 81, 99]
Product | Beidan | 2,841 | 2,298 | 35,146 | 2,367 | [38]
Product | Beibei | 24,827 | 16,864 | 1,667,320 | 197,590 | [38]
Location | Yelp | 19,539 | 21,266 | 405,884 | 363,672 | [23, 24, 30, 32, 33, 46, 80, 85, 91, 104, 105, 120, 122, 130], [5, 41, 50, 63, 70, 76, 81, 83, 88, 93, 99, 101, 102, 112, 127]
Location | Dianping | 59,426 | 10,224 | 934,334 | 813,331 | [103, 104]
Location | Gowalla | 33,661 | 41,229 | 1,218,599 | 283,778 | [8, 9, 39, 53, 74, 94, 101, 102, 121]
Location | Foursquare | 39,302 | 45,595 | 3,627,093 | 304,030 | [8, 9, 39, 94]
Movie | MovieLens | 138,159 | 16,954 | 1,501,622 | 487,184 | [5, 31, 48, 89]
Movie | Flixster | 58,470 | 38,076 | 3,619,736 | 667,313 | [17, 23, 47, 51, 109, 110, 111]
Movie | FilmTrust | 1,508 | 2,071 | 35,497 | 1,853 | [32, 47, 52, 65, 85, 131]
Image | Flickr | 8,358 | 82,120 | 327,815 | 187,273 | [30, 33, 41, 88, 102, 104, 105, 112, 127]
Music | Last.fm | 1,892 | 17,632 | 92,834 | 25,434 | [10, 43, 53, 63, 74, 89, 110, 120, 121, 122, 126]
Bookmark | Delicious | 1,629 | 3,450 | 282,482 | 12,571 | [8, 9, 22, 40, 44, 83, 94]
Microblog | Weibo | 6,812 | 19,519 | 157,555 | 133,712 | [37]
Microblog | X | 8,930 | 232,849 | 466,259 | 96,718 | [37]
Miscellaneous | Douban | 2,848 | 39,586 | 894,887 | 35,770 | [1, 10, 22, 24, 45, 47, 50, 67, 73, 74, 83, 85, 92, 114], [63, 82, 120, 121, 122]
Table 11. Statistics of 17 Publicly-Available Benchmark Datasets
\(^{1}\) Raw Yelp dataset is available at https://www.yelp.com/dataset/documentation/main. Dataset sources are hyperlinked to the dataset names; most datasets provide links for both user-item interactions and user-user relations, whereas some contain only one of the two due to unavailability of the other.

6.1.1 Product-Related Datasets.

Epinions. This dataset is collected from a now-defunct consumer review site, Epinions. It contains 355.8 K trust relations from 18.0 K users and 764.3 K ratings from 18.0 K users on 261.6 K products. Here, a trust relation between two users indicates that one user trusts a review of a product written by another user. For each rating, this dataset originally provides the product name, its category, the rating score in the range [1, 5], the timestamp at which the user rated the item, and the helpfulness of the rating. Thirty-seven GNN-based SocialRS methods reviewed in this survey used this dataset [1, 2, 6, 15, 16, 17, 18, 23, 26, 27, 37, 40, 44, 45, 51, 52, 59, 65, 66, 73, 75, 82, 84, 88, 89, 92, 93, 99, 104, 106, 109, 110, 111, 114, 128, 130, 131], making it the most popular dataset in SocialRS research.
Ciao. This dataset is collected from a consumer review site in the UK, Ciao (https://www.ciao.co.uk/). It contains 111.7 K trust relations from 7.3 K users and 283.3 K ratings from 7.3 K users on 104.9 K products. The rating scale is from 1 to 5. This dataset was used by 34 GNN-based SocialRS methods reviewed in this survey [1, 2, 6, 15, 16, 17, 18, 26, 27, 32, 40, 43, 44, 46, 47, 51, 52, 59, 66, 73, 75, 81, 84, 88, 89, 92, 93, 99, 102, 103, 109, 114, 128].
Beidan. This dataset is collected from a social e-commerce platform in China, Beidan (https://www.beidian.com/), which supports users’ sharing behaviors. It includes 2.3 K social relations from 2.8 K users and 35.1 K ratings from 2.8 K users on 2.2 K products. Li et al. [38] recorded a social relation whenever a user’s friend clicked a link, shared by the user, that points to the information of a specific item. In this dataset, rating information does not provide explicit preference scores; it contains implicit feedback only. This dataset was used in only one GNN-based SocialRS method [38].
Beibei. This dataset is collected from another social e-commerce platform in China, Beibei (https://www.beibei.com/). It is similar to Beidan but provides larger sizes of social relations and ratings. This dataset includes 197.5 K social relations from 24.8 K users and 1.6 M ratings from 24.8 K users on 16.8 K products. For ratings, this dataset provides users’ implicit feedback. This dataset was used in [38] only.

6.1.2 Location-Related Datasets.

Yelp. This dataset is collected from a business review site, Yelp (https://www.yelp.com/). It contains 363.6 K social relations from 19.5 K users and 405.8 K ratings from 19.5 K users on 21.2 K businesses. On Yelp, users can share their check-ins about local businesses (e.g., restaurants and home services) and express their experience through ratings in the range [0, 5]. Also, users can create social relations with other users. Each check-in contains a user, a timestamp, and a business (i.e., an item) that the user visited. Twenty-nine GNN-based SocialRS methods reviewed in this survey used this dataset [5, 23, 24, 30, 32, 33, 41, 46, 50, 63, 70, 76, 80, 81, 83, 85, 88, 91, 93, 99, 101, 102, 104, 105, 112, 120, 122, 127, 130].
Dianping. This dataset is collected from a local restaurant search and review platform in China, Dianping (https://www.dianping.com/). It contains 813.3 K social relations from 59.4 K users and 934.3 K ratings from 59.4 K users on 10.2 K restaurants. For ratings, each user can give scores in the range [1, 5]. This dataset was used in two GNN-based SocialRS methods [103, 104].
Gowalla. This dataset is collected from a location-based social networking site, Gowalla (https://www.gowalla.com/). It contains 283.7 K friendship relations from 33.6 K users and 1.2 M ratings from 33.6 K users on 41.2 K locations. On Gowalla, users can share information about their locations by check-in and make friends based on the shared information. For ratings, this dataset provides users’ implicit feedback. Nine GNN-based SocialRS methods used this dataset [8, 9, 39, 53, 74, 94, 101, 102, 121].
Foursquare. This dataset is collected from another location-based social networking site, Foursquare (https://foursquare.com/). It is similar to Gowalla but provides larger sizes of social relations and ratings. It contains 304.0 K friendship relations from 39.3 K users and 3.6 M ratings from 39.3 K users on 45.5 K locations. For ratings, this dataset provides users’ implicit feedback. This dataset was used in four GNN-based SocialRS methods [8, 9, 39, 94].

6.1.3 Movie-Related Datasets.

MovieLens. This dataset is collected from GroupLens Research (https://grouplens.org/) for the purpose of recommendation research. It contains 487.1 K social relations from 138.1 K users and 1.5 M ratings from 138.1 K users on 16.9 K movies. It should be noted that this dataset has different versions according to the size of the rating information. For the details, refer to https://grouplens.org/datasets/movielens/. Since the original MovieLens datasets do not contain users’ social relations, methods using this dataset built social relations by calculating the similarities between users. This dataset was used in four GNN-based SocialRS methods [5, 31, 48, 89].
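Since the original MovieLens data lacks explicit social relations, inferring them from user-user rating similarity can be sketched as follows. This is a minimal illustration, not the construction used by any particular surveyed method; the function name, cosine-similarity choice, and k-nearest-neighbor linking are our own assumptions.

```python
import math

def knn_social_graph(ratings, k=1):
    """ratings: dict user -> {item: rating}. Connect each user to the k users
    with the highest cosine similarity over their rating vectors, as a proxy
    for missing explicit social relations (illustrative only)."""
    def cosine(a, b):
        num = sum(a[i] * b[i] for i in set(a) & set(b))
        den = math.sqrt(sum(v * v for v in a.values())) * \
              math.sqrt(sum(v * v for v in b.values()))
        return num / den if den else 0.0
    edges = set()
    for u in ratings:
        neighbors = sorted((v for v in ratings if v != u),
                           key=lambda v: cosine(ratings[u], ratings[v]),
                           reverse=True)
        edges.update((u, v) for v in neighbors[:k])
    return edges

# Toy example: users "a" and "b" co-rate items 1 and 2, so they link to each other.
edges = knn_social_graph({"a": {1: 5, 2: 5}, "b": {1: 5, 2: 4}, "c": {3: 5}}, k=1)
```

In practice, a similarity threshold or mutual-kNN filter is often applied on top of this to keep the inferred graph sparse.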
Flixster. This dataset is collected from a movie review site, Flixster (https://www.flixster.com/). It contains 667.3 K friendship relations from 58.4 K users and 3.6 M ratings from 58.4 K users on 38.0 K movies. On Flixster, users can add other users to their friend lists and express their preferences for movies. The rating values are 10 discrete numbers in the range [0.5, 5]. We found that seven GNN-based SocialRS methods used this dataset [17, 23, 47, 51, 109, 110, 111].
FilmTrust. This dataset is collected from another (now-defunct) movie review site, FilmTrust. It is similar to Flixster but provides smaller sizes of social relations and ratings. It contains 1.8 K friendship relations from 1.5 K users and 35.4 K ratings from 1.5 K users on 2.0 K movies. The rating scale is from 1 to 5. This dataset was used in six GNN-based SocialRS methods [32, 47, 52, 65, 85, 131].

6.1.4 Image-Related Dataset.

Flickr. This dataset is collected from a who-trust-whom image-based social sharing platform, Flickr (https://www.flickr.com/). It contains 187.2 K follow relations from 8.3 K users and 327.8 K ratings from 8.3 K users on 82.1 K images. On Flickr, users can follow other users and share their preferences for images with their followers. For ratings, this dataset provides users’ implicit feedback. Nine GNN-based SocialRS methods used this dataset [30, 33, 41, 88, 102, 104, 105, 112, 127].

6.1.5 Music-Related Dataset.

Last.fm. This dataset is collected from a social music platform, Last.fm (https://www.last.fm/). It contains 25.4 K social relations from 1.8 K users and 92.8 K ratings from 1.8 K users on 17.6 K music artists. Each rating indicates that a user listened to an artist’s music, that is, implicit feedback. On Last.fm, users can form friend relations based on their preferences for artists. This dataset was used in 11 GNN-based SocialRS methods [10, 43, 53, 63, 74, 89, 110, 120, 121, 122, 126].

6.1.6 Bookmark-Related Dataset.

Delicious. This dataset is collected from a social bookmarking system, Delicious (https://del.icio.us/). It contains 12.5 K social relations from 1.6 K users and 282.4 K ratings from 1.6 K users on 3.4 K tags. On Delicious, users can bookmark URLs (i.e., implicit feedback) and also assign a variety of semantic tags to bookmarks. Also, they can have social relations with other users having mutual bookmarks or tags. This dataset was used in 7 GNN-based SocialRS methods [8, 9, 22, 40, 44, 83, 94].

6.1.7 Microblog-Related Datasets.

Weibo. This dataset is collected from a social microblog site in China, Weibo (https://weibo.com/). It contains 133.7 K social relations from 6.8 K users and 157.5 K ratings from 6.8 K users on 19.5 K blogs. On Weibo, users can post microblogs (i.e., implicit feedback) and retweet other users’ blogs. Based on such retweeting behavior, Li et al. [37] collected social relations between users. Specifically, if a user has retweeted a microblog from another user, a social relation between the two users is created. This dataset was used in only one GNN-based SocialRS method [37].
X. This dataset is collected from another social microblog site, X (https://twitter.com/). It is similar to Weibo and contains 96.7 K social relations from 8.9 K users and 466.2 K ratings from 8.9 K users on 232.8 K blogs. Li et al. [37] collected a social relation between two users if one user retweets or replies to a tweet from the other. This dataset was used in [37] only.

6.1.8 Miscellaneous.

Douban. This dataset is collected from a social platform in China, Douban (https://douban.com/). It contains 35.7 K social relations from 2.8 K users and 894.8 K ratings from 2.8 K users on 39.5 K items of different categories (e.g., books, movies, and so on). For ratings, this dataset provides users’ implicit feedback. This dataset was used in 19 GNN-based SocialRS methods [1, 10, 22, 24, 45, 47, 50, 63, 67, 73, 74, 82, 83, 85, 92, 114, 120, 121, 122]. However, it should be noted that most methods using this dataset split users’ ratings according to the item categories and then use only those of some categories, for example, Douban-Movie and Douban-Book.

6.2 Evaluation Metrics

6.2.1 Rating Prediction Task.

The methods that focus on explicit feedback aim to minimize the errors of the rating prediction task. To evaluate the performance of this task, they use the following metrics: root mean squared error (RMSE) and mean absolute error (MAE). Specifically, MAE averages the absolute difference between the predicted and actual ratings, while RMSE squares the errors before averaging and thus emphasizes larger errors. Both metrics are computed as follows:
\begin{equation} \begin{aligned}MAE &= \frac{1}{M}\sum _{p_i\in \mathcal {U}}\sum _{q_j\in \mathcal {N}_{p_i}}|\hat{r}_{ij}-{r}_{ij}|,\\ RMSE &= \sqrt {\frac{1}{M}\sum \nolimits _{p_i\in \mathcal {U}}\sum \nolimits _{q_j\in \mathcal {N}_{p_i}}(\hat{r}_{ij}-{r}_{ij})^2}, \end{aligned} \end{equation}
(17)
where M indicates the number of ratings. Also, \(\mathcal {U}\) and \(\mathcal {N}_{p_i}\) denote a set of users and a set of items rated by \(p_i\) , respectively. Lastly, \({r}_{ij}\) and \(\hat{r}_{ij}\) indicate a user \(p_i\) ’s actual and predicted ratings on an item \(q_j\) , respectively.
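As a concrete illustration, both error metrics can be computed from the set of (actual, predicted) test ratings. This is a minimal sketch; the function name and the example ratings are ours, not from any surveyed method.

```python
import math

def mae_rmse(pairs):
    """pairs: list of (actual, predicted) ratings over the M test interactions."""
    m = len(pairs)
    mae = sum(abs(pred - actual) for actual, pred in pairs) / m
    rmse = math.sqrt(sum((pred - actual) ** 2 for actual, pred in pairs) / m)
    return mae, rmse

# Three hypothetical test ratings (actual, predicted)
mae, rmse = mae_rmse([(4.0, 3.5), (2.0, 2.5), (5.0, 4.0)])
# MAE = (0.5 + 0.5 + 1.0)/3 = 2/3; RMSE = sqrt((0.25 + 0.25 + 1.0)/3) = sqrt(0.5)
```

Because RMSE squares each error, the single 1.0-point miss above dominates it more than it dominates MAE.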

6.2.2 Top-N Recommendation Task.

The methods for implicit feedback aim to improve the accuracy of the top-N recommendation task. To evaluate the performance of this task, they use the following metrics: normalized discounted cumulative gain (NDCG) [29], mean reciprocal rank (MRR) [3], area under the ROC curve (AUC), F1 score, precision, recall, and hit rate (HR).
First, NDCG reflects the importance of the ranked positions of items in the set \(\mathcal {R}_{p_i}\) of N items that each method recommends to a user \(p_i\) . Let \(\mathcal {N}_{p_i}\) denote the set of items considered relevant to \(p_i\) (i.e., the ground truth), and let \(y_{k}\in {\lbrace 0,1\rbrace }\) be a binary variable for the kth item \(i_{k}\) in \(\mathcal {R}_{p_i}\) : \(y_{k}\) is set to 1 if \(i_{k}\in \mathcal {N}_{p_i}\) and to 0 otherwise. In this case, \(\text{NDCG}_{p_i}@N\) is computed by:
\begin{equation} \begin{aligned}\text{NDCG}_{p_i}@N & = \dfrac{\text{DCG}_{p_i}@N}{\text{IDCG}_{p_i}@N}, \\ \text{DCG}_{p_i}@N & = \sum _{k=1}^{N}\dfrac{2^{y_{k}}-1}{\log _{2}{(k+1)}}, \end{aligned} \end{equation}
(18)
where \(\text{IDCG}_{p_i}@N\) is the ideal DCG at N, that is, the DCG obtained when the top-N positions are occupied by relevant items \(i_{k} \in \mathcal {N}_{p_i}\) (i.e., \(y_{k}=1\)).
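With binary relevance, the gain term \(2^{y_{k}}-1\) is simply 1 for a hit and 0 otherwise, so NDCG@N can be sketched as follows (the function name is ours; enumeration is 0-based, hence the \(\log_2(k+2)\) discount):

```python
import math

def ndcg_at_n(recommended, relevant, n):
    """recommended: ranked list of item ids; relevant: set of ground-truth items.
    Binary relevance, so 2**y_k - 1 is 1 for a hit and 0 otherwise."""
    dcg = sum(1.0 / math.log2(k + 2)          # k is 0-based, so discount is log2(k+2)
              for k, item in enumerate(recommended[:n]) if item in relevant)
    # Ideal DCG: all top positions filled with relevant items
    idcg = sum(1.0 / math.log2(k + 2) for k in range(min(len(relevant), n)))
    return dcg / idcg if idcg > 0 else 0.0
```

For example, recommending the only relevant item in second place out of two yields \(\frac{1/\log_2 3}{1/\log_2 2} \approx 0.63\).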
Second, MRR reflects the average, over users, of the inverse rank of the first relevant item \(i_{k}\) in \(\mathcal {R}_{p_i}\) . \(\text{MRR}_{p_i}@N\) is computed by:
\begin{equation} \text{MRR}_{p_i}@N = \dfrac{1}{\text{rank}_{p_i}}, \end{equation}
(19)
where \(\text{rank}_{p_i}\) refers to the rank position of the first relevant item in \(\mathcal {R}_{p_i}\) .
Third, AUC evaluates whether each method ranks a rated item higher than an unrated one. Let \(\mathcal {I}\) denote the set of all items. Then, AUC \(_{p_i}\) is computed by:
\begin{equation} \text{AUC}_{p_i} = \dfrac{\sum _{q_j\in \mathcal {N}_{p_i}}\sum _{q_k\in {\mathcal {I}}{\backslash }\mathcal {N}_{p_i}}I(\hat{r}_{ij}\gt \hat{r}_{ik})}{\vert \mathcal {N}_{p_i} \vert \vert {\mathcal {I}}{\backslash }\mathcal {N}_{p_i} \vert }, \end{equation}
(20)
where \(I(\cdot)\) is the indicator function.
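The per-user AUC above is the fraction of (rated, unrated) item pairs that the model orders correctly. A minimal sketch (function name and toy scores are ours):

```python
def auc_user(scores, rated, all_items):
    """scores: dict item -> predicted score for one user; rated: the set of
    items the user actually rated; all_items: the full item set I."""
    unrated = set(all_items) - set(rated)
    pairs = len(rated) * len(unrated)
    correct = sum(1 for j in rated for k in unrated if scores[j] > scores[k])
    return correct / pairs if pairs else 0.0
```

An AUC of 1.0 means every rated item is scored above every unrated item; 0.5 corresponds to random ordering.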
Fourth, the F1 score measures the harmonic mean of the precision and recall of the predictions:
\begin{equation} \text{F1}_{p_i}@N = 2 \cdot \dfrac{\text{Precision}_{p_i}@N \cdot \text{Recall}_{p_i}@N}{\text{Precision}_{p_i}@N+\text{Recall}_{p_i}@N}, \end{equation}
(21)
\begin{equation} \begin{aligned}\text{Precision}_{p_i}@N & = \dfrac{\vert \mathcal {N}_{p_i}\bigcap \mathcal {R}_{p_i} \vert }{\vert \mathcal {R}_{p_i} \vert }, \\ \text{Recall}_{p_i}@N & = \dfrac{\vert \mathcal {N}_{p_i}\bigcap \mathcal {R}_{p_i} \vert }{\vert \mathcal {N}_{p_i} \vert }, \end{aligned} \end{equation}
(22)
where \(\text{Precision}_{p_i}@N\) and \(\text{Recall}_{p_i}@N\) denote precision and recall at N, respectively.
Finally, HR is simply the fraction of users for which the ground truth is included in each \(\mathcal {R}_{p_i}\) :
\begin{equation} \text{HR}@N = \frac{\sum _{p_i\in \mathcal {U}} hit_{p_i}}{m}, \end{equation}
(23)
where m indicates the number of users. Also, \(hit_{p_i}\) is assigned 1 if \(\mathcal {R}_{p_i}\) contains any of the ground truth of \(p_i\) , and 0 otherwise.
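The remaining top-N metrics can be computed in one pass over users. The following is an illustrative sketch (function name is ours), averaging precision, recall, HR, and MRR per user and deriving F1 from the averaged precision and recall:

```python
def topn_metrics(rec_lists, ground_truth, n):
    """rec_lists: dict user -> ranked recommendation list R_p;
    ground_truth: dict user -> set of relevant items N_p."""
    m = len(rec_lists)
    prec = rec = hr = mrr = 0.0
    for u, ranked in rec_lists.items():
        topn, rel = ranked[:n], ground_truth[u]
        hits = sum(1 for item in topn if item in rel)
        prec += hits / n                # |N ∩ R| / |R|
        rec += hits / len(rel)          # |N ∩ R| / |N|
        hr += 1.0 if hits else 0.0      # any ground-truth item recommended
        first = next((k + 1 for k, item in enumerate(topn) if item in rel), None)
        mrr += 1.0 / first if first else 0.0
    prec, rec, hr, mrr = prec / m, rec / m, hr / m, mrr / m
    f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
    return {"precision": prec, "recall": rec, "f1": f1, "hr": hr, "mrr": mrr}

# One user, N = 3, single relevant item ranked second
res = topn_metrics({"u": [5, 2, 7]}, {"u": {2}}, 3)
```

Note that some papers instead compute F1 per user before averaging; results can differ slightly, so the aggregation choice should be reported.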
Conclusion. These metrics give complementary insights. NDCG measures the rank-discounted cumulative gain of the recommendations relative to the ideal gain, whereas MRR considers only the rank of the first relevant item; NDCG can thus incorporate the relevance of all items in the ranked list. AUC, F1, precision, and recall are position-free metrics: precision measures the proportion of recommended items that are relevant, recall measures the proportion of relevant items that are recommended, and AUC measures how often a rated item is scored above an unrated one. Finally, HR measures the fraction of users for whom at least one ground-truth item appears in the ranked list.

6.3 Experimental Results

It should be noted that GNN-based SocialRS methods used different sets of datasets for experimentation and had different experimental settings, such as training/test ratio, top-k values, and metrics. For a fair comparison, we selected one dataset for each domain and compared the accuracy values of methods with the same settings on that dataset.
Table 12 shows the results on Epinions (Product), Yelp (Location), Flixster (Movie), Flickr (Image), Last.fm (Music), Delicious (Bookmark), and Douban (Miscellaneous). In summary, we did not find evidence of GNN encoders being optimized for a specific domain; the best performer varies with the domain and metric. However, it is worth noting that on the Douban dataset, many SocialRS methods are based on HyperGNN. This trend may stem from the fact that the Douban dataset contains different types of user behavior for different types of items; by employing a HyperGNN encoder, the authors may aim to capture these various motifs accurately.
Table 12.
Table 12. Comparison of Recommendation Accuracy for GNN-Based SocialRS Methods

7 Future Directions

In this section, we discuss the limitations of GNN-based SocialRS methods and present several future research directions.

7.1 Graph Augmentation in GNN-Based SocialRS

An intrinsic challenge of GNN-based SocialRS methods lies in the sparsity of the input data (i.e., user-item interactions and user-user relations). To mitigate this problem, some GNN-based SocialRS methods [15, 38, 85, 94, 103, 120, 122, 127] have explored additional supervision signals from the input data, which can be utilized as different views of the original graph structure. Although many graph augmentation techniques [13, 129], such as node/edge deletion and graph rewiring, have been proposed recently in the machine learning community, existing GNN-based SocialRS methods focus only on adding edges between two users or between a user and an item [15, 38, 85, 94, 103, 120, 122, 127]. Therefore, leveraging extra self-supervision signals based on various augmentation techniques to learn user and item embeddings more efficiently and effectively is a promising direction.
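To make the augmentation idea concrete, edge dropout, one of the deletion-style techniques mentioned above, generates two randomly perturbed views of a graph; a contrastive loss then pulls together the embeddings a GNN encoder produces from the two views. This is a generic sketch of the technique, not the pipeline of any specific surveyed method; the function name and edge list are ours.

```python
import random

def edge_dropout_views(edges, drop_ratio=0.2, seed=0):
    """Create two augmented views of a graph by independently dropping a random
    fraction of its edges. Each view is encoded separately by the GNN, and a
    contrastive objective aligns the two resulting sets of node embeddings."""
    rng = random.Random(seed)                  # fixed seed makes views reproducible
    make_view = lambda: [e for e in edges if rng.random() >= drop_ratio]
    return make_view(), make_view()

# e.g., user-user social edges as (u, v) pairs
social_edges = [(u, u + 1) for u in range(100)]
view_a, view_b = edge_dropout_views(social_edges, drop_ratio=0.5, seed=42)
```

The same routine applies to user-item interaction edges, and combining views from both graphs is one way to obtain the cross-view signals discussed above.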

7.2 Trustworthy GNN-Based SocialRS

Existing GNN-based SocialRS methods have focused on improving accuracy by relying only on users’ past feedback. However, there are other important “beyond accuracy” metrics, which we call trustworthiness1 following [125]. Motivated by the importance of such metrics, various trustworthy GNN architectures have been proposed to incorporate core aspects of trustworthiness, including robustness, explainability, privacy, and fairness, into GNN encoders [125]. One GNN-based SocialRS method has been proposed in this direction, specifically addressing the privacy issue [52]: Liu et al. [52] devised a framework that stores users’ private data only on their local devices and analyzes it collaboratively via federated learning. Thus, developing trustworthy GNN-based SocialRS remains wide open for research. For example, consider robustness: bad actors may want to push certain products to certain users in a SocialRS setting; how robust existing GNN-based SocialRS methods would be against such attacks is an unanswered question, opening opportunities to create models that are both accurate and robust.

7.3 Heterogeneity

In real-world graphs, nodes and their interactions are often multi-typed. Such graphs, which are called heterogeneous graphs, convey rich information such as heterogeneous attributes, meta-path structures, and temporal properties. Although HetGNN encoders have recently attracted attention in many domains (e.g., healthcare and cybersecurity) [98], there have been only a few attempts to leverage such heterogeneity in SocialRS [8, 94]. Therefore, designing a HetGNN-based SocialRS method remains an open question for the future.

7.4 Efficiency and Scalability

Most real-world graphs are very large and grow rapidly. However, most GNN-based SocialRS methods are too complicated to scale to such large-scale graphs. Some works, including SocialLGN [43], SEPT [120], and DcRec [103], have attempted to build more scalable models by removing the non-linear activation function, feature transformation, and self-connections, whereas Tao et al. [88] leveraged the knowledge distillation (KD) technique in SocialRS. However, designing a highly scalable GNN architecture remains an important and challenging open problem.

8 Conclusions

Although there has been a surge of papers on developing GNN-based social recommendation methods, no survey paper has reviewed them thoroughly. Our work is the first systematic and comprehensive survey of GNN-based SocialRS, studying 84 papers collected by following the PRISMA guidelines. We present a novel taxonomy of inputs and architectures for GNN-based SocialRS, thereby categorizing the different methods developed over the years in this important area. Through this survey, we hope to enable researchers to better position their work within recent trends, to serve as a gateway for new researchers to this important and active topic, and to help readers grasp recent trends in SocialRS and develop novel GNN-based SocialRS methods.

Footnote

1
“Trustworthy” is defined in the Oxford Dictionary as follows: an object or a person that you can rely on to be good, honest, sincere, etc. [125].

References

[1]
Ting Bai, Youjie Zhang, Bin Wu, and Jian-Yun Nie. 2020. Temporal graph neural networks for social recommendation. In Proceedings of the IEEE International Conference on Big Data. 898–903.
[2]
Zhongqin Bi, Lina Jing, Meijing Shan, Shuming Dou, and Shiyang Wang. 2021. Hierarchical social recommendation model based on a graph neural network. Wireless Communications and Mobile Computing 2021 (2021), 1–10.
[3]
John S. Breese, David Heckerman, and Carl Myers Kadie. 1998. Empirical analysis of predictive algorithms for collaborative filtering. In Proceedings of the Conference on Uncertainty in Artificial Intelligence. 43–52.
[4]
Sébastien Bubeck and Nicolò Cesa-Bianchi. 2012. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning 5, 1 (2012), 1–122.
[5]
Hongxu Chen, Hongzhi Yin, Tong Chen, Weiqing Wang, Xue Li, and Xia Hu. 2022. Social boosted recommendation with folded bipartite network embedding. IEEE Transactions on Knowledge and Data Engineering 34, 2 (2022), 914–926.
[6]
Jiajia Chen, Xin Xin, Xianfeng Liang, Xiangnan He, and Jun Liu. 2023. GDSRec: Graph-based decentralized collaborative filtering for social recommendation. IEEE Transactions on Knowledge and Data Engineering 35, 5 (2023), 4813–4824.
[7]
Rui Chen, Qingyi Hua, Yan-shuo Chang, Bo Wang, Lei Zhang, and Xiangjie Kong. 2018. A survey of collaborative filtering-based recommender systems: From traditional methods to hybrid methods based on social networks. IEEE Access 6 (2018), 64301–64320.
[8]
Tianwen Chen and Raymond Chi-Wing Wong. 2021. An efficient and effective framework for session-based social recommendation. In Proceedings of the ACM International Conference on Web Search and Data Mining. 400–408.
[9]
Yan Chen, Wanhui Qian, Dongqin Liu, Mengdi Zhou, Yipeng Su, Jizhong Han, and Ruixuan Li. 2022. Your social circle affects your interests: Social influence enhanced session-based recommendation. In Proceedings of the International Conference on Computational Science. 549–562.
[10]
Yujin Chen, Jing Wang, Zhihao Wu, and Youfang Lin. 2022. Integrating user-group relationships under interest similarity constraints for social recommendation. Knowledge-Based Systems 249 (2022), 108921.
[11]
ShuiGuang Deng, Longtao Huang, Guandong Xu, Xindong Wu, and Zhaohui Wu. 2017. On deep learning for trust-aware recommendations in social networks. IEEE Transactions on Neural Networks and Learning Systems 28, 5 (2017), 1164–1177.
[12]
Yue Deng. 2022. Recommender systems based on graph embedding techniques: A review. IEEE Access 10 (2022), 51587–51633.
[13]
Kaize Ding, Zhe Xu, Hanghang Tong, and Huan Liu. 2022. Data augmentation for deep graph learning: A survey. ACM SIGKDD Explorations 24, 2 (2022), 61–77.
[14]
Yingtong Dou, Hao Yang, and Xiaolong Deng. 2016. A survey of collaborative filtering algorithms for social recommender systems. In Proceedings of the International Conference on Semantics, Knowledge and Grids. 40–46.
[15]
Jing Du, Zesheng Ye, Lina Yao, Bin Guo, and Zhiwen Yu. 2022. Socially-aware dual contrastive learning for cold-start recommendation. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval. 1927–1932.
[16]
Wenqi Fan, Yao Ma, Qing Li, Yuan He, Eric Zhao, Jiliang Tang, and Dawei Yin. 2019. Graph neural networks for social recommendation. In Proceedings of the ACM Web Conference (WWW). 417–426.
[17]
Wenqi Fan, Yao Ma, Qing Li, Jianping Wang, Guoyong Cai, Jiliang Tang, and Dawei Yin. 2022. A graph neural network framework for social recommendations. IEEE Transactions on Knowledge and Data Engineering 34, 5 (2022), 2033–2047.
[18]
Bairan Fu, Wenming Zhang, Guangneng Hu, Xinyu Dai, Shujian Huang, and Jiajun Chen. 2021. Dual side deep context-aware modulation for social recommendation. In Proceedings of the ACM Web Conference (WWW). 2524–2534.
[19]
Chen Gao, Yu Zheng, Nian Li, Yinfeng Li, Yingrong Qin, Jinghua Piao, Yuhan Quan, Jianxin Chang, Depeng Jin, Xiangnan He, and Yong Li. 2023. A survey of graph neural networks for recommender systems: Challenges, methods, and directions. ACM Transactions on Recommender Systems 1, 1 (2023), 1–51.
[20]
Francisco García-Sánchez, Ricardo Colomo Palacios, and Rafael Valencia-García. 2020. A social-semantic recommender system for advertisements. Information Processing and Management 57, 2 (2020), 102153.
[21]
Fabio Gasparetti, Giuseppe Sansonetti, and Alessandro Micarelli. 2021. Community detection in social recommender systems: A survey. Applied Intelligence 51, 6 (2021), 3975–3995.
[22]
Pan Gu, Yuqiang Han, Wei Gao, Guandong Xu, and Jian Wu. 2021. Enhancing session-based social recommendation through item graph embedding and contextual friendship modeling. Neurocomputing 419 (2021), 190–202.
[23]
Zhiwei Guo and Heng Wang. 2020. A deep graph neural network-based mechanism for social recommendations. IEEE Transactions on Industrial Informatics 17, 4 (2020), 2776–2783.
[24]
Jiadi Han, Qian Tao, Yufei Tang, and Yuhan Xia. 2022. DH-HGCN: Dual homogeneity hypergraph convolutional network for multiple social recommendations. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval. 2190–2194.
[25]
Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yong-Dong Zhang, and Meng Wang. 2020. LightGCN: Simplifying and powering graph convolution network for recommendation. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval. 639–648.
[26]
Thi Linh Hoang, Tuan Dung Pham, and Viet Cuong Ta. 2021. Improving graph convolutional networks with transformer layer in social-based items recommendation. In Proceedings of the International Conference on Knowledge and Systems Engineering. 1–6.
[27]
Liyang Hou, Wenping Kong, Yali Gao, Yang Chen, and Xiaoyong Li. 2021. PA-GAN: Graph attention network for preference-aware social recommendation. In Proceedings of the Journal of Physics: Conference Series, Vol. 1848. 012141.
[28]
Mohsen Jamali and Martin Ester. 2010. A matrix factorization technique with trust propagation for recommendation in social networks. In Proceedings of the ACM Conference on Recommender Systems. 135–142.
[29]
Kalervo Järvelin and Jaana Kekäläinen. 2000. IR evaluation methods for retrieving highly relevant documents. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval. 41–48.
[30]
Nan Jiang, Li Gao, Fuxian Duan, Jie Wen, Tao Wan, and Honglong Chen. 2021. SAN: Attention-based social aggregation neural networks for recommendation system. International Journal of Intelligent Systems 37, 6 (2021), 3373–3393.
[31]
Wenbo Jiang and Yanrui Sun. 2023. Social-RippleNet: Jointly modeling of ripple net and social information for recommendation. Applied Intelligence 53, 3 (2023), 3472–3487.
[32]
Yanbin Jiang, Huifang Ma, Yuhang Liu, Zhixin Li, and Liang Chang. 2021. Enhancing social recommendation via two-level graph attentional networks. Neurocomputing 449 (2021), 71–84.
[33]
Bo Jin, Ke Cheng, Liang Zhang, Yanjie Fu, Minghao Yin, and Lu Jiang. 2020. Partial relationship aware influence diffusion via a multi-channel encoding scheme for social recommendation. In Proceedings of the ACM International Conference on Information & Knowledge Management. 585–594.
[34]
Dmitri V. Krioukov, Fragkiskos Papadopoulos, Maksim Kitsak, Amin Vahdat, and Marián Boguñá. 2010. Hyperbolic geometry of complex networks. Physical Review E 82, 036106 (2010), 1–18.
[35]
Adit Krishnan, Hari Cheruvu, Cheng Tao, and Hari Sundaram. 2019. A modular adversarial approach to social recommendation. In Proceedings of the ACM International Conference on Information & Knowledge Management. 1753–1762.
[36]
Youfang Leng and Li Yu. 2022. Incorporating global and local social networks for group recommendations. Pattern Recognition 127 (2022), 108601.
[37]
Hui Li, Lianyun Li, Guipeng Xv, Chen Lin, Ke Li, and Bingchuan Jiang. 2021. SPEX: A generic framework for enhancing neural social recommendation. ACM Transactions on Information Systems 40, 2 (2021), 1–33.
[38]
Nian Li, Chen Gao, Depeng Jin, and Qingmin Liao. 2023. Disentangled modeling of social homophily and influence for social recommendation. IEEE Transactions on Knowledge and Data Engineering 35, 6 (2023), 5738–5751.
[39]
Quan Li, Xinhua Xu, Xinghong Liu, and CHEN Qi. 2022. An attention-based spatiotemporal GGNN for next POI recommendation. IEEE Access 10 (2022), 26471–26480.
[40]
Yuan Li and Kedian Mu. 2020. Heterogeneous information diffusion model for social recommendation. In Proceedings of the IEEE International Conference on Tools with Artificial Intelligence. 184–191.
[41]
Yuqiang Li, Zhilong Zhan, Huan Li, and Chun Liu. 2022. Interest-aware influence diffusion model for social recommendation. Journal of Intelligent Information Systems 58, 2 (2022), 363–377.
[42]
Guoqiong Liao, Xiaobin Deng, Changxuan Wan, and Xiping Liu. 2022. Group event recommendation based on graph multi-head attention network combining explicit and implicit information. Information Processing & Management 59, 2 (2022), 102797.
[43]
Jie Liao, Wei Zhou, Fengji Luo, Junhao Wen, Min Gao, Xiuhua Li, and Jun Zeng. 2022. SocialLGN: Light graph convolution network for social recommendation. Information Sciences 589 (2022), 595–607.
[44]
Junfa Lin, Siyuan Chen, and Jiahai Wang. 2022. Graph neural networks with dynamic and static representations for social recommendation. In Proceedings of the International Conference on Database Systems for Advanced Applications. 264–271.
[45]
Chun Liu, Yuxiang Li, Hong Lin, and Chaojie Zhang. 2023. GNNRec: Gated graph neural network for session-based social recommendation model. Journal of Intelligent Information Systems 60, 1 (2023), 137–156.
[46]
Hai Liu, Chao Zheng, Duantengchuan Li, Zhaoli Zhang, Ke Lin, Xiaoxuan Shen, Neal N. Xiong, and Jiazhang Wang. 2022. Multi-perspective social recommendation method with graph representation learning. Neurocomputing 468 (2022), 469–481.
[47]
Jinxin Liu, Yingyuan Xiao, Wenguang Zheng, and Ching-Hsien Hsu. 2023. SIGA: Social influence modeling integrating graph autoencoder for rating prediction. Applied Intelligence 53, 6 (2023), 6432–6447.
[48]
Shenghao Liu, Bang Wang, Xianjun Deng, and Laurence T. Yang. 2021. Self-attentive graph convolution network with latent group mining and collaborative filtering for personalized recommendation. IEEE Transactions on Network Science and Engineering 9, 5 (2021), 3212–3221.
[49]
Xiao Liu, Fanjin Zhang, Zhenyu Hou, Zhaoyu Wang, Li Mian, Jing Zhang, and Jie Tang. 2023. Self-supervised Learning: Generative or contrastive. IEEE Transactions on Knowledge and Data Engineering 35, 1 (2023), 857–876.
[50]
Yang Liu, Liang Chen, Xiangnan He, Jiaying Peng, Zibin Zheng, and Jie Tang. 2022. Modelling high-order social relations for item recommendation. IEEE Transactions on Knowledge and Data Engineering 34, 9 (2022), 4385–4397.
[51]
Zhen Liu, Xiaodong Wang, Ying Ma, and Xinxin Yang. 2022. Relational metric learning with high-order neighborhood interactions for social recommendation. Knowledge and Information Systems 64, 6 (2022), 1525–1547.
[52]
Zhiwei Liu, Liangwei Yang, Ziwei Fan, Hao Peng, and Philip S. Yu. 2022. Federated social recommendation with graph neural network. ACM Transactions on Intelligent Systems and Technology 13, 4 (2022), 1–24.
[53]
Yuanwei Liufu and Hong Shen. 2021. Social recommendation via graph attentive aggregation. In Proceedings of the International Conference on Parallel and Distributed Computing: Applications and Technologies. 369–382.
[54]
Hao Ma, Irwin King, and Michael R. Lyu. 2009. Learning to recommend with social trust ensemble. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval. 203–210.
[55]
Hao Ma, Michael R. Lyu, and Irwin King. 2009. Learning to recommend with trust and distrust relationships. In Proceedings of the ACM Conference on Recommender Systems. 189–196.
[56]
Hao Ma, Haixuan Yang, Michael R. Lyu, and Irwin King. 2008. SoRec: Social recommendation using probabilistic matrix factorization. In Proceedings of the ACM International Conference on Information & Knowledge Management. 931–940.
[57]
Hao Ma, Dengyong Zhou, Chao Liu, Michael R. Lyu, and Irwin King. 2011. Recommender systems with social regularization. In Proceedings of the ACM International Conference on Web Search and Data Mining. 287–296.
[58]
Wenze Ma, Yuexian Wang, Yanmin Zhu, Zhaobo Wang, Mengyuan Jing, Xuhao Zhao, Jiadi Yu, and Feilong Tang. 2024. MADM: A model-agnostic denoising module for graph-based social recommendation. In Proceedings of the 17th ACM International Conference on Web Search and Data Mining. 501–509.
[59]
Supriyo Mandal and Abyayananda Maiti. 2021. Graph neural networks for heterogeneous trust based social recommendation. In Proceedings of the IEEE International Joint Conference on Neural Networks. 1–8.
[60]
Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, and Xiuqiang He. 2021. UltraGCN: Ultra simplification of graph convolutional networks for recommendation. In Proceedings of the ACM International Conference on Information & Knowledge Management. 1253–1262.
[61]
Peter V. Marsden and Noah E. Friedkin. 1993. Network studies of social influence. Sociological Methods & Research 22, 1 (1993), 127–151.
[62]
Miller McPherson, Lynn Smith-Lovin, and James M. Cook. 2001. Birds of a feather: Homophily in social networks. Annual Review of Sociology 27 (2001), 415–444.
[63]
Hang Miao, Anchen Li, and Bo Yang. 2022. Meta-path enhanced lightweight graph neural network for social recommendation. In Proceedings of the International Conference on Database Systems for Advanced Applications. 134–149.
[64]
David Moher, Alessandro Liberati, Jennifer Tetzlaff, Douglas G. Altman, and PRISMA Group. 2009. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Annals of Internal Medicine 151, 4 (2009), 264–269.
[65]
Nan Mu, Daren Zha, Yuanye He, and Zhihao Tang. 2019. Graph attention networks for neural social recommendation. In Proceedings of the IEEE International Conference on Tools with Artificial Intelligence. 1320–1327.
[66]
Kanika Narang, Yitong Song, Alexander Schwing, and Hari Sundaram. 2021. FuseRec: Fusing user and item homophily modeling with temporal recommender systems. Data Mining and Knowledge Discovery 35, 3 (2021), 837–862.
[67]
Yong Niu, Xing Xing, Mindong Xin, Qiuyang Han, and Zhichun Jia. 2021. Multi-preference social recommendation of users based on graph neural network. In Proceedings of the International Conference on Intelligent Computing, Automation and Applications. 190–194.
[68]
Alexis Papadimitriou, Panagiotis Symeonidis, and Yannis Manolopoulos. 2012. A generalized taxonomy of explanations styles for traditional and social recommender systems. Data Mining and Knowledge Discovery 24, 3 (2012), 555–583.
[69]
Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih. 2017. Cross-sentence n-ary relation extraction with graph LSTMs. Transactions of the Association for Computational Linguistics 5 (2017), 101–115.
[70]
Pengpeng Qiao, Zhiwei Zhang, Zhetao Li, Yuanxing Zhang, Kaigui Bian, Yanzhou Li, and Guoren Wang. 2023. TAG: Joint triple-hierarchical attention and GCN for review-based social recommender system. IEEE Transactions on Knowledge and Data Engineering 35, 10 (2023), 9904–9919.
[71]
Yuhan Quan, Jingtao Ding, Chen Gao, Lingling Yi, Depeng Jin, and Yong Li. 2023. Robust preference-guided denoising for graph based social recommendation. In Proceedings of the ACM Web Conference 2023, WWW 2023. 1097–1108.
[72]
Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. 2009. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the Conference on Uncertainty in Artificial Intelligence. 452–461.
[73]
Amirreza Salamat, Xiao Luo, and Ali Jafari. 2021. HeteroGraphRec: A heterogeneous graph-based neural networks for social recommendations. Knowledge-Based Systems 217 (2021), 106817.
[74]
Dewen Seng, Binquan Li, Chenxuan Lai, and Jiayi Wang. 2021. Adaptive learning user implicit trust behavior based on graph convolution network. IEEE Access 9 (2021), 108363–108372.
[75]
Xiao Sha, Zhu Sun, and Jie Zhang. 2021. Disentangling multi-facet social relations for recommendation. IEEE Transactions on Computational Social Systems 9, 3 (2021), 867–878.
[76]
Liye Shi, Wen Wu, Wang Guo, Wenxin Hu, Jiayi Chen, Wei Zheng, and Liang He. 2022. SENGR: Sentiment-enhanced neural graph recommender. Information Sciences 589 (2022), 655–669.
[77]
Jyoti Shokeen and Chhavi Rana. 2018. A review on the dynamics of social recommender systems. International Journal of Web Engineering and Technology 13, 3 (2018), 255–276.
[78]
Jyoti Shokeen and Chhavi Rana. 2020. Social recommender systems: Techniques, domains, metrics, datasets and future scope. Journal of Intelligent Information Systems 54, 3 (2020), 633–667.
[79]
Jyoti Shokeen and Chhavi Rana. 2020. A study on features of social recommender systems. Artificial Intelligence Review 53, 2 (2020), 965–988.
[80]
Changhao Song, Bo Wang, Qinxue Jiang, Yehua Zhang, Ruifang He, and Yuexian Hou. 2021. Social recommendation with implicit social influence. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval. 1788–1792.
[81]
Hongtao Song, Feng Wang, Zhiqiang Ma, and Qilong Han. 2022. Multirelationship aware personalized recommendation model. In Proceedings of the International Conference of Pioneering Computer Scientists, Engineers and Educators. 123–136.
[82]
Liqiang Song, Ye Bi, Mengqiu Yao, Zhenyu Wu, Jianming Wang, and Jing Xiao. 2020. Dream: A dynamic relation-aware model for social recommendation. In Proceedings of the ACM International Conference on Information & Knowledge Management. 2225–2228.
[83]
Weiping Song, Zhiping Xiao, Yifan Wang, Laurent Charlin, Ming Zhang, and Jian Tang. 2019. Session-based social recommendation via dynamic graph attention networks. In Proceedings of the ACM International Conference on Web Search and Data Mining. 555–563.
[84]
Hongji Sun, Lili Lin, and Riqing Chen. 2020. Social recommendation based on graph neural networks. In Proceedings of the IEEE International Conference on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking. 489–496.
[85]
Yundong Sun, Dongjie Zhu, Haiwen Du, and Zhaoshuo Tian. 2022. Motifs-based recommender system via hyper-graph convolution and contrastive learning. Neurocomputing 512 (2022), 323–338.
[86]
Jiliang Tang, Xia Hu, Huiji Gao, and Huan Liu. 2013. Exploiting local and global social context for recommendation. In Proceedings of the International Joint Conference on Artificial Intelligence. 2712–2718.
[87]
Jiliang Tang, Xia Hu, and Huan Liu. 2013. Social recommendation: A review. Social Network Analysis and Mining 3, 4 (2013), 1113–1133.
[88]
Ye Tao, Ying Li, Su Zhang, Zhirong Hou, and Zhonghai Wu. 2022. Revisiting graph based social recommendation: A distillation enhanced social graph network. In Proceedings of the ACM Web Conference (WWW). 2830–2838.
[89]
Dong Nguyen Tien and Hai Pham Van. 2020. Graph neural network combined knowledge graph for recommendation system. In Proceedings of the International Conference on Computational Data and Social Networks. 59–70.
[90]
Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2018. Graph attention networks. In Proceedings of the International Conference on Learning Representations.
[91]
M. Vijaikumar, Shirish Shevade, and M. Narasimha Murty. 2019. SoRecGAT: Leveraging graph attention mechanism for top-N social recommendation. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases. 430–446.
[92]
Joojo Walker, Fengli Zhang, Fan Zhou, and Ting Zhong. 2021. Social-trust-aware variational recommendation. International Journal of Intelligent Systems 37, 4 (2021), 2774–2802.
[93]
Hao Wang, Defu Lian, Hanghang Tong, Qi Liu, Zhenya Huang, and Enhong Chen. 2021. HyperSoRec: Exploiting hyperbolic user and item representations with multiple aspects for social-aware recommendation. ACM Transactions on Information Systems 40, 2 (2021), 1–28.
[94]
Liuyin Wang, Xianghong Xu, Kai Ouyang, Huanzhong Duan, Yanxiong Lu, and Hai-Tao Zheng. 2022. Self-supervised dual-channel attentive network for session-based social recommendation. In Proceedings of the IEEE International Conference on Data Engineering. 2034–2045.
[95]
Shoujin Wang, Longbing Cao, Yan Wang, Quan Z. Sheng, Mehmet A. Orgun, and Defu Lian. 2021. A survey on session-based recommender systems. ACM Computing Surveys 54, 7 (2021), 1–38.
[96]
Shoujin Wang, Liang Hu, Yan Wang, Xiangnan He, Quan Z. Sheng, Mehmet A. Orgun, Longbing Cao, Francesco Ricci, and Philip S. Yu. 2021. Graph learning based recommender systems: A review. In Proceedings of the International Joint Conference on Artificial Intelligence. 4644–4652.
[97]
Tianle Wang, Lianghao Xia, and Chao Huang. 2023. Denoised self-augmented learning for social recommendation. In Proceedings of the 32nd International Joint Conference on Artificial Intelligence, IJCAI 2023. 2324–2331.
[98]
Xiao Wang, Deyu Bo, Chuan Shi, Shaohua Fan, Yanfang Ye, and Philip S. Yu. 2022. A survey on heterogeneous graph embedding: Methods, techniques, applications and sources. IEEE Transactions on Big Data 9, 2 (2022), 415–436.
[99]
Yu Wang and Qilong Zhao. 2022. Multi-order hypergraph convolutional neural network for dynamic social recommendation system. IEEE Access 10 (2022), 87639–87649.
[100]
Chunyu Wei, Yushun Fan, and Jia Zhang. 2022. High-order social graph neural network for service recommendation. IEEE Transactions on Network and Service Management 19, 4 (2022), 4615–4628.
[101]
Chunyu Wei, Yushun Fan, and Jia Zhang. 2023. Time-aware service recommendation with social-powered graph hierarchical attention network. IEEE Transactions on Services Computing 16, 3 (2023), 2229–2240.
[102]
Bin Wu, Lihong Zhong, Lina Yao, and Yangdong Ye. 2022. EAGCN: An efficient adaptive graph convolutional network for item recommendation in social internet of things. IEEE Internet of Things Journal 9, 17 (2022), 16386–16401.
[103]
Jiahao Wu, Wenqi Fan, Jingfan Chen, Shengcai Liu, Qing Li, and Ke Tang. 2022. Disentangled contrastive learning for social recommendation. In Proceedings of the ACM International Conference on Information & Knowledge Management. 4570–4574.
[104]
Le Wu, Junwei Li, Peijie Sun, Richang Hong, Yong Ge, and Meng Wang. 2022. Diffnet++: A neural influence and interest diffusion network for social recommendation. IEEE Transactions on Knowledge and Data Engineering 34, 10 (2022), 4753–4766.
[105]
Le Wu, Peijie Sun, Yanjie Fu, Richang Hong, Xiting Wang, and Meng Wang. 2019. A neural influence diffusion model for social recommendation. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval. 235–244.
[106]
Qitian Wu, Hengrui Zhang, Xiaofeng Gao, Peng He, Paul Weng, Han Gao, and Guihai Chen. 2019. Dual graph attention networks for deep latent representation of multifaceted social effects in recommender systems. In Proceedings of the ACM Web Conference (WWW). 2091–2102.
[107]
Shiwen Wu, Wentao Zhang, Fei Sun, and Bin Cui. 2022. Graph neural networks in recommender systems: A survey. ACM Computing Surveys 55, 5 (2022), 1–37.
[108]
Lianghao Xia, Yizhen Shao, Chao Huang, Yong Xu, Huance Xu, and Jian Pei. 2023. Disentangled graph social recommendation. In Proceedings of the 39th IEEE International Conference on Data Engineering. 2332–2344.
[109]
Xinyu Xiao, Junhao Wen, Wei Zhou, Fengji Luo, Min Gao, and Jun Zeng. 2022. Multi-interaction fusion collaborative filtering for social recommendation. Expert Systems with Applications 205 (2022), 117610.
[110]
Yang Xiao, Qingqi Pei, Tingting Xiao, Lina Yao, and Huan Liu. 2021. MutualRec: Joint friend and item recommendations with mutualistic attentional graph neural networks. Journal of Network and Computer Applications 177 (2021), 102954.
[111]
Yang Xiao, Lina Yao, Qingqi Pei, Xianzhi Wang, Jian Yang, and Quan Z. Sheng. 2020. MGNN: Mutualistic graph neural network for joint friend and item recommendation. IEEE Intelligent Systems 35, 5 (2020), 7–17.
[112]
Xiaojun Xie, Xihuang Zhang, Honggang Luo, and Tao Zhang. 2022. Similarity-based multi-relational attention network for social recommendation. In Proceedings of the International Conference on Computing and Artificial Intelligence. 307–317.
[113]
Guandong Xu, Zhiang Wu, Yanchun Zhang, and Jie Cao. 2015. Social networking meets recommender systems: Survey. International Journal Social Network Mining 2, 1 (2015), 64–100.
[114]
Huance Xu, Chao Huang, Yong Xu, Lianghao Xia, Hao Xing, and Dawei Yin. 2020. Global context enhanced social recommendation with hierarchical graph neural networks. In Proceedings of the IEEE International Conference on Data Mining. 701–710.
[115]
Dengcheng Yan, Tianyi Tang, Wenxin Xie, Yiwen Zhang, and Qiang He. 2022. Session-based social and dependency-aware software recommendation. Applied Soft Computing 118 (2022), 108463.
[116]
Bo Yang, Yu Lei, Dayou Liu, and Jiming Liu. 2013. Social collaborative filtering by trust. In Proceedings of the International Joint Conference on Artificial Intelligence. 2747–2753.
[117]
Liangwei Yang, Zhiwei Liu, Yu Wang, Chen Wang, Ziwei Fan, and Philip S. Yu. 2022. Large-scale personalized video game recommendation via social-aware contextualized graph neural network. In Proceedings of the ACM Web Conference (WWW). 3376–3386.
[118]
Xiwang Yang, Yang Guo, Yong Liu, and Harald Steck. 2014. A survey of collaborative filtering based social recommender systems. Computer Communications 41 (2014), 1–10.
[119]
Haochao Ying, Liang Chen, Yuwen Xiong, and Jian Wu. 2016. Collaborative deep ranking: A hybrid pair-wise recommendation algorithm with implicit feedback. In Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining, Vol. 9652. 555–567.
[120]
Junliang Yu, Hongzhi Yin, Min Gao, Xin Xia, Xiangliang Zhang, and Nguyen Quoc Viet Hung. 2021. Socially-aware self-supervised tri-training for recommendation. In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery & Data Mining. 2084–2092.
[121]
Junliang Yu, Hongzhi Yin, Jundong Li, Min Gao, Zi Huang, and Lizhen Cui. 2022. Enhancing social recommendation with adversarial graph convolutional networks. IEEE Transactions on Knowledge and Data Engineering 34, 8 (2022), 3727–3739.
[122]
Junliang Yu, Hongzhi Yin, Jundong Li, Qinyong Wang, Nguyen Quoc Viet Hung, and Xiangliang Zhang. 2021. Self-supervised multi-channel hypergraph convolutional network for social recommendation. In Proceedings of the ACM Web Conference (WWW). 413–424.
[123]
Junliang Yu, Hongzhi Yin, Xin Xia, Tong Chen, Jundong Li, and Zi Huang. 2024. Self-supervised learning for recommender systems: A survey. IEEE Transactions on Knowledge and Data Engineering 36, 1 (2024), 335–355.
[124]
Victoria Zayats and Mari Ostendorf. 2018. Conversation modeling on reddit using a graph-structured LSTM. Transactions of the Association for Computational Linguistics 6 (2018), 121–132.
[125]
He Zhang, Bang Wu, Xingliang Yuan, Shirui Pan, Hanghang Tong, and Jian Pei. 2024. Trustworthy graph neural networks: Aspects, methods and trends. Proceedings of the IEEE 112, 2 (2024), 97–139.
[126]
Lisa Zhang, Zhe Kang, Xiaoxin Sun, Hong Sun, Bangzuo Zhang, and Dongbing Pu. 2021. KCRec: Knowledge-aware representation graph convolutional network for recommendation. Knowledge-Based Systems 230 (2021), 107399.
[127]
Yongshuai Zhang, Jiajin Huang, Mi Li, and Jian Yang. 2022. Contrastive graph learning for social recommendation. Frontiers in Physics 10 (2022), 35.
[128]
Minghao Zhao, Qilin Deng, Kai Wang, Runze Wu, Jianrong Tao, Changjie Fan, Liang Chen, and Peng Cui. 2021. Bilateral filtering graph convolutional network for multi-relational social recommendation in the power-law networks. ACM Transactions on Information Systems 40, 2 (2021), 1–24.
[129]
Tong Zhao, Gang Liu, Stephan Günnemann, and Meng Jiang. 2023. Graph data augmentation for graph machine learning: A survey. IEEE Data Engineering Bulletin 46, 2 (2023), 140–165.
[130]
Yan Zhen, Huan Liu, Meiyu Sun, Boran Yang, and Puning Zhang. 2022. Adaptive preference transfer for personalized IoT entity recommendation. Pattern Recognition Letters 162 (2022), 40–46.
[131]
Li Zheng, Qun Liu, and Youmin Zhang. 2021. Social recommendation based on preference disentangle aggregation. In Proceedings of the International Conference on Big Data and Information Analytics. 1–8.
[132]
Zhi-Hua Zhou and Ming Li. 2005. Tri-training: Exploiting unlabeled data using three classifiers. IEEE Transactions on Knowledge and Data Engineering 17, 11 (2005), 1529–1541.
[133]
Peng Zhu, Dawei Cheng, Siqiang Luo, Fangzhou Yang, Yifeng Luo, Weining Qian, and Aoying Zhou. 2022. SI-News: Integrating social information for news recommendation with attention-based graph convolutional network. Neurocomputing 494 (2022), 33–42.
[134]
Zirui Zhu, Chen Gao, Xu Chen, Nian Li, Depeng Jin, and Yong Li. 2021. Inhomogeneous social recommendation with hypergraph convolutional networks. In Proceedings of the IEEE International Conference on Data Engineering.

Cited By
  • (2024) Unraveling Privacy Risks of Individual Fairness in Graph Neural Networks. In 2024 IEEE 40th International Conference on Data Engineering (ICDE). 1712–1725. DOI: 10.1109/ICDE60146.2024.00139. Online publication date: 13-May-2024.

Published In

ACM Computing Surveys  Volume 56, Issue 10
October 2024
954 pages
EISSN:1557-7341
DOI:10.1145/3613652
Issue’s Table of Contents
This work is licensed under a Creative Commons Attribution-NonCommercial International 4.0 License

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 22 June 2024
Online AM: 29 April 2024
Accepted: 13 April 2024
Revised: 17 February 2024
Received: 05 December 2022
Published in CSUR Volume 56, Issue 10


Author Tags

  1. Graph neural networks
  2. social network
  3. recommender systems
  4. social recommendation
  5. survey

Qualifiers

  • Survey

Funding Sources

  • NSF
  • Defense Advanced Research Projects Agency (DARPA)
  • Microsoft, Google, and The Home Depot
  • Institute of Information & communications Technology Planning & Evaluation (IITP)
  • Korean government (MSIT)
  • A High-Performance Big-Hypergraph Mining Platform for Real-World Downstream Tasks
  • Artificial Intelligence Graduate School Program (Hanyang University)
  • Artificial Intelligence Graduate School Program (UNIST)
