
A Comprehensive Survey on Automatic Knowledge Graph Construction

Published: 30 November 2023

Abstract

Automatic knowledge graph construction aims at manufacturing structured human knowledge. To this end, much effort has historically been spent extracting informative fact patterns from different data sources. More recently, however, research interest has shifted to acquiring conceptualized structured knowledge beyond informative data, and researchers have been exploring new ways of handling sophisticated construction tasks in diversified scenarios. Thus, there is a demand for a systematic review of paradigms that organize knowledge structures beyond data-level mentions. To meet this demand, we comprehensively survey more than 300 methods to summarize the latest developments in knowledge graph construction. A knowledge graph is built in three steps: knowledge acquisition, knowledge refinement, and knowledge evolution. The processes of knowledge acquisition are reviewed in detail, including obtaining entities with fine-grained types and their conceptual linkages to knowledge graphs, resolving coreferences, and extracting entity relationships in complex scenarios. The survey covers models for knowledge refinement, including knowledge graph completion and knowledge fusion. Methods to handle knowledge evolution are also systematically presented, including conditional knowledge acquisition, conditional knowledge graph completion, and knowledge dynamics. We present paradigms to compare the distinctions among these methods along the axes of data environment, motivation, and architecture. Additionally, we provide briefs on accessible resources that can help readers develop practical knowledge graph systems. The survey concludes with discussions on the challenges and possible directions for future exploration.
Appendices

A Applications of Knowledge Graph Construction

Research communities have been incorporating knowledge graph (KG) construction techniques into real-world AI applications, beyond the tasks of building KG systems themselves. Applications of KG construction extend construction methods into more user-concerned scenarios, integrating KG side information with task-related approaches. These advances cover fake news detection, dialogue systems, and other applications. In this section, we review some remarkable achievements.

A.1 Recommendation Systems

A recommendation system predicts how users interact with related objects (such as items and other users). Methods incorporating KB systems for this task effectively handle the cold-start and data-sparsity problems in big data scenarios.
Current knowledge-enhanced applications also focus on building KG structures from user-item interactive social networks while combining KG side information via embedding-based approaches [245]. DKN [246] links entities in news content to an external knowledge base (KB) to obtain a subgraph for learning knowledge embeddings; the model then uses an attention-based method to aggregate user embeddings that interpret user-item interactions for recommendation. To incorporate cross-task features via end-to-end training, KGeRec [247] uses feature interaction units to combine KG construction with the recommendation task and gather deep conceptualized semantics. Because these earlier designs do not fully exploit high-order semantics within graph structures, KRAN [6] proposes a GCN-based framework to obtain neighborhood features that denote intrinsic user preferences; its knowledge-refining attention mechanism captures inter-entity attention coefficients for recommendation.
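The attention-based aggregation shared by models such as DKN and KRAN reduces to a simple idea: KG-neighbor embeddings are weighted by their affinity to a user (or item) vector before being pooled into one knowledge-aware profile. The sketch below is illustrative only, using plain-Python vectors and dot-product attention; the cited models learn their attention networks end to end.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def attentive_aggregate(user_vec, neighbor_vecs):
    # Weight each KG-neighbor embedding by its affinity to the user vector,
    # then sum them into one knowledge-aware profile vector.
    weights = softmax([dot(user_vec, n) for n in neighbor_vecs])
    dim = len(user_vec)
    return [sum(w * n[i] for w, n in zip(weights, neighbor_vecs)) for i in range(dim)]
```

A single neighbor is returned unchanged (its attention weight is 1), and identical neighbors pool to themselves; real systems replace the dot product with a learned scoring network.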

A.2 Fake News Detection

Fake news detection judges whether a factual statement reflects the truth. In practice, applications for this task apply NER and EL methods to dock onto KG systems, then compare knowledge features with contextual features to determine trustworthy content.
As an early achievement, Pan et al. [248] separately build KGs from real and fake news sources and obtain KG embeddings via the TransE model; fact-checking is then performed by contrasting the related knowledge embeddings from the different content sources. To further combine existing external KG information with content features, KAN [249] recognizes entities in contents, aligns them with an external KG, and incorporates feature embeddings via attention mechanisms to detect factual information. Because these methods underestimate long-term context characteristics in the content, CompareNet [8] uses a heterogeneous document-level graph to capture interactive features among sentences and compares embeddings with high-order contextual features against a KB. More knowledge-aware models also consider multi-modal information for assessment; for example, KMAGCN [250] unifies textual, multi-modal, and KG information via an adaptive graph convolutional network (GCN), achieving breakthroughs in different scenarios.

A.3 Dialog Systems

A dialogue system serves the demands of human-machine conversation in natural language. A KG-based dialogue system traverses a KG based on user inputs via KG walk approaches and performs task-related knowledge reasoning and semantic augmentation to generate consistent multi-round responses.
Many applications for knowledge-based multi-turn conversation tasks handle this challenge via encoder-decoder architectures. Moon et al. [251] aggregate dialogue-level semantics of inputs via multiple Bi-LSTM encoders and use a KG walker (reasoner) based on attention-based pruning to decode responses. Similarly, KCMC [7] presents a generative seq2seq solution that consolidates a hierarchical attention-based dialogue context encoder and knowledge-enhanced embeddings via a dynamic graph attention decoder for knowledge reasoning; KCMC generates fluent user-concerned responses through high-performance copying mechanisms rather than conventional ranking. Further efforts concentrate on diversified user interests in open-domain conversations: DKRN [252] proposes a dynamic knowledge routing strategy that retrieves conceptualized information for automatic personalized dialogues. Designing more effective solutions for complex conversations is now a prevailing trend.
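The attention-pruned KG walk used by such dialogue reasoners can be caricatured in a few lines: from the current node, keep only the best-scoring outgoing edge and repeat for a bounded number of hops. Everything here is a toy assumption (adjacency-dict graph, a caller-supplied `score` function standing in for a learned attention model), not the architecture of any cited system.

```python
def kg_walk(graph, start, score, max_hops=2):
    # graph: {node: [(relation, target), ...]}; score(node, relation, target)
    # stands in for a learned attention model that rates each outgoing edge.
    path = [start]
    node = start
    for _ in range(max_hops):
        edges = graph.get(node, [])
        if not edges:
            break
        # Prune: follow only the single highest-scoring edge from this node.
        rel, nxt = max(edges, key=lambda e: score(node, e[0], e[1]))
        path.append((rel, nxt))
        node = nxt
    return path
```

Real KG walkers keep a beam of candidate paths and ground the scores in the dialogue context encoder rather than a hand-written function.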

B More KG-related Resources

B.1 More Practical KG Datasets

In this part, we describe more practical KG projects for readers. Information on these projects is provided in Table 5.
Table 5. The Information of Practical KG Projects

| Categorization | Project | KG Inclusion | Year | URL |
| --- | --- | --- | --- | --- |
| Encyclopedia KG | YAGO | 2B+ facts, 64M+ entities | 2007 | https://yago-knowledge.org |
| | Freebase | 360M+ fact triples | 2007 | https://freebase-easy.cs.uni-freiburg.de/dump/ |
| | DBpedia | 320 classes with 1,650 different properties, 247M+ triples | 2007 | https://github.com/DBpedia/ |
| | CN-DBpedia | 9M+ entities, 67M+ triples | 2015 | http://kw.fudan.edu.cn/cnDBpedia/download/ |
| | Probase | 5.4M+ concepts | 2010 | https://concept.research.microsoft.com/ |
| | Wikidata | 96M+ items | 2012 | https://www.wikidata.org/wiki |
| | CN-Probase | 17M+ entities, 33M+ "is-a" relations | 2017 | http://kw.fudan.edu.cn/apis/cnprobase/ |
| Linguistic KG | WordNet | 117,000 synsets | 1985 | https://wordnet.princeton.edu/ |
| | ConceptNet | 34M+ items | 1999 | https://www.conceptnet.io/ |
| | HowNet | 35,202 concepts, 2,196 sememes | 1999 | https://openhownet.thunlp.org/download |
| | BabelNet | 13M nodes | 2010 | http://babelnet.org/rdf/page/ |
| | THUOCL | 157K+ word nodes in 7.3B+ documents | 2016 | http://thuocl.thunlp.org/ |
| Commonsense KG | OpenCyc | 2M+ fact triples | 1984 | https://sourceforge.net/projects/opencyc/ |
| | ASER | 438M+ nodes, 648M+ edges | 2020 | https://github.com/HKUST-KnowComp/ASER |
| | TransOMCS | 18M+ tuples | 2020 | https://github.com/HKUST-KnowComp/TransOMCS |
| Enterprise support KG | Google Knowledge Graph | 500B+ facts on 5B+ entities | 2012 | https://developers.google.com/knowledge-graph |
| | Facebook Graph Search | dynamic social network of users, user-generated contents | 2013 | https://developers.facebook.com/docs/graph-api/ |
| Domain-specific KG | Drugbank | 14K+ drug entities | 2006 | https://go.drugbank.com/releases/latest |
| | AMiner ASN | 2M+ paper nodes, 8M+ citation relations | 2007 | https://www.aminer.cn/aminernetwork |
| | Huapu | 17M+ person nodes | 2017 | https://www.zhonghuapu.com/ |
| | OAG | 369M+ authors, 380M+ papers, 92M+ linking relations | 2017 | https://www.aminer.cn/data/?nav=openData#Open-Academic-Graph |
| | COVID-19 Concepts | 4,784 entities, 35,172 relation links | 2020 | http://openkg.cn/dataset/covid-19-concept |
| | Aminer COVID-19 Open Data | reports, news, research, and other archives of COVID-19 | 2020 | https://www.aminer.cn/data-covid19/ |
| | CPubMed-KG | 3.9M+ tuples | 2021 | https://cpubmed.openi.org.cn/graph/wiki |
| Federated KG | GEDMatch | 1.2M+ DNA profiles | 2010 | https://www.gedmatch.com/ |
| | OpenKG.cn | 200+ datasets from 94 organizations | 2015 | http://www.openkg.cn |

B.1.1 Encyclopedia KGs.

After the early attempt of the DBpedia project [40] (developed from Wikipedia), more KG projects incorporate automatic extraction tools, such as Freebase [9], Wikidata, and CN-DBpedia [253] (developed from Wikipedia, Baidu Baike, Hudong Baike, and automatically extracted content). The Max Planck Institute develops YAGO [254], which integrates temporal and geographical structures in Wikipedia with the WordNet ontology. Mintz et al. [119] applied distant supervision over Freebase for automatic entity-relationship annotation. KGs of eventualities also concern the research community: CN-Probase [255] extends Probase with concepts in Chinese to comprehend general modes of textual data that involve uncertain occurrences.

B.1.2 Linguistic KGs.

Besides WordNet [43], BabelNet [256] extends WordNet with cross-lingual attributes and relations of words from encyclopedias. ConceptNet [257], as part of the Linked Open Data cloud, gathers conceptual knowledge through crowdsourcing, while HowNet [258] manually collects sememe (minimum indivisible semantic unit) information for word concepts and attributes. THUOCL [259] records the document frequency (DF) of words from a well-filtered web corpus. Developers create high-performance word embeddings based on these well-built linguistic KGs for downstream applications.

B.1.3 Commonsense KGs.

Besides OpenCyc [44], ASER [24] provides a weighted KG that describes commonsense by modeling entities of actions, states, and events and the relationships among these objects; it acquires its nodes via dependency-pattern selection and conceptualizes them with Probase. TransOMCS [23] develops an auto-generated dataset covering 20 commonsense relations obtained from linguistic graphs.

B.1.4 Enterprise Support KGs.

Similar to the Google Knowledge Graph (GKG) [45], Facebook Graph Search delivers Facebook's semantic search engine, providing user-specific answers through the dynamic Facebook social KB.

B.1.5 Domain-specific KGs.

Besides Drugbank [46], CPubMed-KG innovatively develops a medical KG presented in Chinese. Many KG collection efforts also contribute to fighting the COVID-19 pandemic, such as the COVID-19 Concepts dataset and Aminer COVID-19 Open Data. As for academic activities, the Academic Social Network (ASN) of AMiner [260] and the Open Academic Graph (OAG) [261] disclose academic activities including social networks and papers.

B.1.6 Federated KGs.

Federation strategies have been applied to more KG systems with sensitive data to build integrated knowledge models while preventing data exchange. Researchers also focus on federated KG platforms: OpenKG.cn [262], a crowd-sourcing community, provides a knowledge-sharing platform to develop knowledge applications with federated learning while supporting the decentralization of knowledge blockchains.

B.2 More Off-the-shelf KG Tools

In this part, we review more off-the-shelf tools for KG construction. The details of these tools are presented in Table 6.

B.2.2 Knowledge Acquisition.

Early toolkits directly extract fact triples through rules, patterns, and statistical features; they are also known as Information Extraction (IE) toolkits. Besides KnowItAll [12], more toolkits leverage semi-supervised designs to collect relational information, such as TextRunner [11], ReVerb [263], which produces refined verbal triples via syntactic and lexical information, and OLLIE [264], which supports non-verbal triple discovery.
Many NLP applications can directly achieve knowledge acquisition sub-tasks, including NER, RE, and CO, or provide linguistic features for related applications. NLTK [265] and StanfordNLP [266] are powerful toolkits for knowledge acquisition based on statistical algorithms such as CRF and MEM, and can also provide background features such as POS tags and NP chunks. TableMiner+ [267] and MantisTable [268] extract knowledge from semi-structured table forms.
Recent developers have been drawn to DL-based toolkits. spaCy [269] is a comprehensive practical NLP toolkit that integrates NeuralCoref for CO tasks and also provides a trainable deep-learning (DL) module for specialized relation extraction (RE) (in spaCy v3). OpenNRE [270] provides various extensible neural network models such as CNN and LSTM to perform supervised RE.

B.2.3 Knowledge Refinement.

Besides the integrated DL-based toolkits OpenKE [49] and OpenEA [50], OpenNE integrates embedding models such as Node2Vec [271] and LINE [272] to obtain global representations from a complete KG for completion. As for the knowledge fusion (KG merging) task, Falcon-AO [273] uses multiple algorithms to measure semantic similarity for aligning concepts in different notations.

C More Discussions On Knowledge Graph Construction

In this part, we present more discussions of directions and challenges for KG construction techniques.

C.1 Strategies for Privacy Protection in Knowledge Graph Completion and Other Applications

Practical KG completion models and other high-performance KG-related applications rely on embedding learning on large datasets. However, these data sources may carry user-provided sensitive data, where privacy violations could happen if machine learning is directly performed. Strategies to prevent privacy violations are thus proposed to serve the mandatory requirement for safe KG applications. Current mainstream research mainly focuses on federated learning and differential privacy to achieve privacy protection.
Federated learning is an enlightening direction for privacy protection while performing machine learning for KG completion and other tasks. A federated setting for KGs that trains model ensembles from multiple sources is one popular strategy. Significant advances have been made in federated knowledge embeddings for safe KG completion, such as FKGE [274] and FedE [275], which prohibit data exchange while incorporating cross-modal features during training. However, entity alignment for knowledge fusion is a paradoxical bottleneck that impedes federated learning for complex completion tasks and other applications: it requires multi-source KGs to be shared before model learning, which exchanges sensitive information during knowledge fusion. How to create a privacy-preserving super feature space for encrypted entity alignment while federating features is still open for exploration. Designing more privacy-friendly models for constructing KGs is critical for sensitive data scenarios. We illustrate the procedure of developing a federated model in Figure 20.
Fig. 20.
Fig. 20. An illustration of building a federated model from different knowledge providers while protecting privacy. In this procedure, an encrypted entity alignment process is performed before training separate models on multi-source data parts, then a collaborator calculates and aggregates encrypted gradients of each model to prevent leakage. A federated model only reserves data-invisible crowd-sourced knowledge features.
Even when trained on a sanitized database, a KG embedding model serving completion or other purposes may still expose sensitive information about identities when minor variations in the anonymized features are analyzed. Therefore, another challenging direction is to thwart identity detection through such attacks on a sanitized database. Differential Privacy (DP) [276] is an insightful strategy to anonymize sensitive data so that neighboring data records are indistinguishable from each other; privacy attacks that contrast minor features are thus compromised. Current efforts have incorporated differential privacy in deep learning models, such as HGM [277] with robust DP strategies. Besides, ensuring the fairness of DP-outputted properties for different individuals or groups is still open for further discussion [278]. The objective of "fairness", which requires that anonymized data outputs be viewed equally, may have polarized impacts in various scenarios: for candidate selection tasks, "fairness" can be guaranteed with well-designed DP strategies [279], while researchers find that such "fairness" targets may impair privacy-preserving efforts on census statistics [280]. More ideas should be considered to handle the tradeoff between privacy and fairness.
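To make the DP idea concrete, the classic Laplace mechanism releases a numeric query answer plus noise scaled to the query's L1 sensitivity divided by the privacy budget epsilon, which suffices for epsilon-DP. This is a minimal textbook sketch, not the mechanism used by any system cited above.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    # Release true_value + Laplace(0, sensitivity/epsilon) noise: the standard
    # epsilon-DP mechanism for a numeric query with the given L1 sensitivity.
    rng = rng or random.Random()
    scale = sensitivity / epsilon
    u = rng.random() - 0.5  # uniform on (-0.5, 0.5)
    # Inverse-CDF sampling of the Laplace distribution.
    noise = -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
    return true_value + noise
```

Smaller epsilon means stronger privacy but noisier answers; a huge epsilon degenerates to releasing the true value almost exactly, which is the tradeoff the fairness debate above is about.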

C.2 Fabricated Information Detection

The information generated by generative AI or concocted by humans brings a new challenge for KG systems, requiring developers to distinguish unrealistic or unoriginal inputs from reliable data.
Unlike the data noise handled by triple classification models, information faked by humans on purpose is more confusing, with hard-to-tell features. Current research has focused on detecting fake news by comprehending context features, such as CompareNet [8] for textual information and KMAGCN [250] for multi-modal information. Explainable models [281] are also considered for this goal. Detecting fabricated information from generative AI is more formidable. For generated images, many deep learning models [282] have been devised to tackle face photos manipulated by DeepFake; more attention should also be paid to generated data in other scenarios such as events and sceneries. Furthermore, texts generated by a language model (like ChatGPT [283]) can also present unoriginal or low-quality content. DetectGPT [284] proposes a solution that analyzes the probability distribution of perturbed text to pick out model-generated parts, and tools like GPTZero also attempt to judge such content by other means. However, these tools still make many inaccurate judgments, partly because generative models with adversarial learning mechanisms can compromise regular detection efforts. It is also worth reminding readers that excessive data collection can unintentionally make a human write like an AI. Expert decisions and legislative actions would also be needed to alleviate these problems.
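DetectGPT's core test can be stated in a few lines: model-generated text tends to sit at a local peak of the model's log-probability, so its log-prob should clearly exceed the average over slight perturbations of itself, while human text hovers near zero. In this sketch, `log_prob` is a stand-in callable (an assumption; the real system scores with a large LM and produces perturbations with a mask-filling model).

```python
def detection_score(log_prob, text, perturbations):
    # Probability-curvature score: log-prob of the candidate text minus the
    # mean log-prob of its perturbed variants. Large positive values suggest
    # the text sits at a probability peak, i.e. is likely model-generated.
    baseline = sum(log_prob(p) for p in perturbations) / len(perturbations)
    return log_prob(text) - baseline
```

The decision then thresholds this score; choosing the threshold (and generating good perturbations) is where the hard work, and the inaccuracy discussed above, lives.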

C.3 Incorporating Generative AI and Other Large Pre-trained Models for Knowledge Graph Construction

Large pre-trained models have had a significant impact on multiple KG construction tasks and their related applications. KG-BERT [285] incorporates knowledge triples as training sequences, achieving breakthroughs in KG completion tasks. DRAGON [286] proposes a self-supervision strategy to consolidate KGs with textual information for obtaining deep advanced representations, effectively enhancing model performance on question-answering tasks with complex reasoning. Developing strategies that utilize large KG structures for pre-training language models to serve KG construction and other applications is promising. Directly extracting intrinsic facts from language models will also be a new direction for interpretable machine learning. Petroni et al. [287] discover that large language models trained only on contextual information, like BERT [26], may contain or potentially possess oracle knowledge about entity relations; they propose the LAMA probe to evaluate explicable relational knowledge within these models via a simplified cloze test. Such attempts may also unveil how structured knowledge emerges from context awareness.
Recently, advances in generative AI have also delivered new directions for KG construction. A recent survey [242] has reported generative models that turn multiple KG construction tasks into seq2seq structure prediction tasks, which can better comprehend complex task-related knowledge. These advances serving generative KG construction have provided a promising unified paradigm to probe into tasks like entity linking, RE, and KG completion. The survey also points out that training efficiency and the generation quality of models remain open for future improvement.
Another problem is absorbing generated information for KG construction, such as gathering data from commonsense text generation tasks. KG-BART [288] provides a solution based on pre-trained models to generate a text about a concept set from a KB. For example, given {fish, catch, river, net}, the model generates the text "fishermen use strong nets to catch plentiful fishes in the river." Such accountable data can enrich a KG system with the background knowledge in a pre-trained model. In terms of AI-generated factoid content, a KG system may utilize it for data augmentation. However, generated unseen objects like "a WWI battlefield using magic as weapons" are not likely to be reasonable real-world facts and may not be appropriate content for display. It is worth pointing out that completing a KG with fabricated data could be misleading or could profile users with stereotypes (like generating a man's self-portrait based on his birthplace). Such technical abuses should not be encouraged.

C.4 Long and Intricate Contexts for KG Construction

Intricate cross-sentence or cross-paragraph contexts impede different KG construction sub-tasks in practical use, especially RE tasks. It is worth reminding readers that complex contexts do not merely relate to long-term dependency. Yao et al. [27] point out that four kinds of inference, including pattern recognition, coreference reasoning, logical reasoning, and commonsense reasoning, are also critical for capturing high-order contextual semantics. A specific example is presented in Figure 21.
Fig. 21.
Fig. 21. An example of relation inference over long contexts in a document.
A model that handles complex long contexts should focus on intricate cross-sentence patterns while performing reasoning over multiple linguistic objects. Besides the document-level extraction models in Section 3.3.6, some efforts in Section D.2 also model document-level contexts via heterogeneous models for entity typing (ET). Noticeably, ambiguous expressions may occur in user-generated texts, and these are usually not correctly interpreted by models without external information. Another challenging issue is multi-hop reasoning; more linguistic structures should be explored to comprehend tortuous expressions.
Out-of-context expressions that require background knowledge to handle are bottlenecks for KG construction. The obstacles are mainly two-fold: (1) spontaneous knowledge and (2) evidence support. Spontaneously generated commonsense knowledge is often utilized to derive new facts, e.g., a man and a woman who have kids should be couples/partners, despite such convictions sometimes being inaccurate. How to obtain commonsense rules and adapt them to suitable scenarios is an important direction. Meanwhile, many document-level datasets do not contain evidence information for correct logic paths. Efforts like [289] have probed into document-level evidence structures for relation mentions. However, it is unlikely that a model can learn to organize clues correctly to resolve facts in all scenarios (e.g., validating the conclusion of a philosophical book). We believe long context is not merely an NLP question, and models [290] that understand linguistic expressions will be a critical direction. Furthermore, conditions like temporal and geographical information provided by data sources should also be considered for rigorously comprehending contexts.

D More Advances for Knowledge Graph Construction

Besides mainstream models for different KG construction sub-tasks, there are many other innovative or practical attempts that work in different scenarios. In this part, we present more advances to enlighten readers in designing novel construction solutions.

D.1 Rule-based Methods for Knowledge Acquisition

Many early attempts focus on rules to achieve knowledge acquisition or its sub-tasks. Despite their inaccuracy in big data environments, rule-based methods are practical solutions for quickly extracting massive raw knowledge. These methods also work in scenarios where high-performance computing is not available.
Rule-based approaches [291] are the general solutions for NER. For semi-structured web data, wrapper induction generates rule wrappers that interpret semi-structures such as DOM tree nodes and tags to harvest entities from pages; some rule-based solutions, such as Omini [292], are unsupervised and require no human annotation. For entities in table forms, many approaches are based on the property-attribute layouts of Wikipedia, such as the rule-based tools [40][254] for DBpedia and YAGO. For unstructured data, classic NER systems [293] rely on manually constructed rule sets for pattern matching. Semi-supervised approaches improve rule-based NER by iteratively generating refined new patterns via pattern seeds and scoring, such as bootstrapping-based NER [294].
Rule-focused methods are the earliest attempts at RE tasks on different kinds of data structures, gathering strings that fit hand-crafted templates, e.g., "$PEOPLE is born in $LOCATION." refers to ($PEOPLE, born-in, $LOCATION). However, these unsupervised strategies rely on complex linguistic knowledge to label data. Later, researchers concentrated on automatic pattern discovery for triple mining. Semi-supervised design is an enlightening strategy to reduce hand-crafted features and data labeling, uncovering more reliable patterns from a small group of annotated samples: DIPRE [295] iteratively extracts patterns with seeds, while the bootstrapping-based KnowItAll [12] and Snowball [296] equip DIPRE with confidence evaluation. Some rule-based models consider more lexical objects for mining: OLLIE [264] incorporates lexical structure patterns with relational dependency paths in texts, and MetaPAD [297] combines lexical segmentation and synonymous clustering into meta patterns that are sufficiently informative, frequent, and accurate for relational triples. Specifically for semi-structured tables, researchers design table structure-based rules to acquire relationships arranged in rows, columns, and table headers, such as [298]. Furthermore, some semi-structured extraction systems use distant supervision and tolerate potential errors, directly querying external databases like DBpedia and Wikipedia to acquire relationships for entities found in tabular data, such as [70], [299], and [300]. Similarly, Muñoz et al. [300] look up Wikipedia tables to label relationships in tabular forms, and Krause et al. [301] expand rule sets for RE via distant supervision.
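A template like "$PEOPLE is born in $LOCATION" maps directly onto a regular expression with named groups. The sketch below is purely illustrative (the capitalized-phrase pattern and the "born-in" relation name are our assumptions, not taken from any cited system), but it shows why such rules are fast to write yet brittle: any paraphrase outside the template is missed.

```python
import re

# Illustrative capitalized-phrase pattern standing in for $PEOPLE / $LOCATION.
PROPER = r"[A-Z]\w*(?: [A-Z]\w*)*"
BORN_IN = re.compile(rf"(?P<head>{PROPER}) was born in (?P<tail>{PROPER})")

def extract_born_in(text):
    # Emit a (head, relation, tail) triple for every template match.
    return [(m.group("head"), "born-in", m.group("tail"))
            for m in BORN_IN.finditer(text)]
```

Systems like DIPRE and Snowball start from a handful of seed triples, find sentences containing the seed pairs, and induce such patterns automatically with a confidence score, instead of hand-writing each one.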
Rule-based models that perform end-to-end knowledge acquisition are lightweight solutions for specific domains. However, these designs require extra maintenance work when the domain changes.

D.2 More Embedding-based Models

Embedding-based models lay the foundation for KG completion while providing semantic support for different knowledge acquisition sub-tasks on semi-structured or unstructured data.
More variants of translation embedding (TransE) models for KG completion have been developed to search the entity-relation feature space via mapping matrices, such as TransR [302] and TransH [303]. Meanwhile, researchers also consider tensor-based empirical models for embedding over a complete large graph, such as RESCAL [304] and DistMult [305]. Some knowledge representation models leverage non-linear neural networks to exploit deep knowledge embedding features for KG completion, such as ConvE [306], M-DCN [307], and TransGate [308]. Unstructured entity descriptions are also incorporated for feature enhancement, as in the DKRL [309] and ConMask [310] models. GCNs, such as R-GCN [311], W-GCN [312], and COMPGCN [313], are also presented to encode a KG and can comprehend neighborhood information through semantic diffusion mechanisms. ProjE [314] projects an entity and a relation into distinctive feature spaces through neural operations to capture candidates for missing entities; however, when the relation element is missing, a latent vector space of relationship candidates cannot be recovered. SENN [315] bridges the disparity-distribution-space semantic gaps with a multi-task embedding-sharing strategy that unifies relation, head entity, and tail entity link prediction.
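The translation intuition underlying this family is that h + r ≈ t for a true triple, so plausibility can be scored as the negative distance ||h + r - t||, and link prediction (h, r, ?) ranks candidate tails by that score. The sketch below uses toy hand-set two-dimensional vectors rather than learned embeddings.

```python
def transe_score(h, r, t):
    # TransE plausibility: negative L1 distance ||h + r - t||_1, so a perfect
    # translation h + r = t scores 0 and worse fits score lower.
    return -sum(abs(h[i] + r[i] - t[i]) for i in range(len(h)))

def rank_tails(h, r, candidates):
    # Link prediction (h, r, ?): order (name, vector) candidates by score.
    return sorted(candidates, key=lambda c: transe_score(h, r, c[1]), reverse=True)
```

TransR and TransH differ mainly in projecting h and t into a relation-specific space before applying the same translation test.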
As for ET, novel embedding-based models combine global graph structure features and background knowledge to predict the potential types of entities via representations. Researchers report that the classical TransE model performs poorly when directly applied to ET tasks. Moon et al. [154] propose the TransE-ET model, which adjusts TransE by optimizing the Euclidean distance between entity and type representations, but it is limited by insufficient entity-type and triple features. Newer solutions construct various graphs to share diversified features of entity-related objects for learning embeddings with entity-type features. PTE [17] reduces data noise via partial-label embedding: it constructs a bipartite graph between entities and all their types while connecting entity nodes to their related extracted text features, and finally utilizes the background KG by building a type hierarchy tree with the derived correlation weights. JOIE [316] embeds entity nodes in ontology-view and instance-view graphs, gathering entity types via top-k ranking between entity and type candidates. Likewise, ConnectE [317] maps entities onto their types while learning knowledge triple embeddings. Practical models improving embeddings on heterogeneous graphs for ET tasks (in the Xlore project [42]) also include [318], [319], and [320]. We present graph structures for embedding-based ET in Figure 22.
Fig. 22.
Fig. 22. Illustration of embedding-based ET via heterogeneous graph structures. (PTE [17]).
Embedding-based models are also critical solutions for entity linking (EL) via entity embeddings. LIEGE [321] derives distributional context representations to link entities for web pages. Early researchers [322] leveraged bag-of-words (BoW) contextual embeddings of entity mentions and then performed clustering to gather linked entity pairs; Lasek et al. [323] later extended the BoW model with linguistic embeddings for EL tasks. Researchers also focus on deep representations for high-performance linking. DSRM [324] employs a deep neural network to exploit semantic relatedness, combining entity descriptions, relationships, and type features to obtain deep entity features for linking. EDKate [325] jointly learns low-dimensional embeddings of entities and words from the KB and textual data, capturing intrinsic entity-mention features beyond the BoW model. Furthermore, Ganea and Hofmann [18] introduce an attention mechanism for joint embedding and pass semantic interactions for disambiguation, while Le and Titov [19] model the latent relations between mentions in context for embedding, utilizing mention-wise and relation-wise normalization to score a pair-wise coherence function.
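The early BoW-based linking idea is easy to make concrete: represent the mention's context and each candidate KB entry's description as token counts, and pick the candidate with the highest cosine similarity. The candidate names and token lists below are made-up examples; real systems add priors, aliases, and learned representations on top of this baseline.

```python
import math
from collections import Counter

def bow_cosine(a_tokens, b_tokens):
    # Cosine similarity between two bag-of-words token lists.
    a, b = Counter(a_tokens), Counter(b_tokens)
    num = sum(a[w] * b[w] for w in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def link_mention(mention_context, candidates):
    # candidates: [(kb_id, description_tokens), ...]; return the best kb_id.
    return max(candidates, key=lambda c: bow_cosine(mention_context, c[1]))[0]
```

The deep models cited above (DSRM, EDKate) effectively replace the count vectors with learned dense embeddings, keeping the same similarity-ranking skeleton.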
Researchers also focus on embedding-based distributional models over multiple semantic structures to handle coreference resolution (CO). Durrett and Klein [326] utilize antecedent representations to enable coreference inference through distributional features. Martschat and Strube [327] explore distributional semantics over mention-pair and tree models to enhance coreference representations, directly picking robust features to optimize the CO task. Chakrabarti et al. [328] further employ the MapReduce framework to cover anaphoric entity names through query context similarity.
As for joint RE, novel distributional embedding-based models are proposed to model cross-task distributions and bridge the semantic gaps between NER and RC. Ren et al. [329] propose CoType, a knowledge-enhanced distributional model for joint extraction. In this model, entity pairs are first mapped onto their mentions in the KB, then tagged with entity types and all relation candidates provided by the KB. The model learns embeddings of relation mentions with contextualized lexical and syntactic features while training embeddings of entity mentions with their types; a contextual relation mention is then derived from its head and tail entity embeddings via the TransE [330] model. CoType assumes interactive co-occurrence between entities and their relation labels, bridging the distribution discrepancy with knowledge from the external domain and extra type features. Notably, this model also effectively mitigates noise in distant-supervised datasets. However, it requires feature engineering and extra KBs.
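The TransE-style composition used to derive a relation mention from its head and tail entity embeddings (h + r ≈ t) can be sketched as follows, with toy vectors standing in for learned embeddings:

```python
import numpy as np

def relation_offset(head_vec, tail_vec):
    """TransE-style composition: the relation vector is approximated
    by the tail-minus-head offset, since h + r should be close to t."""
    return tail_vec - head_vec

def nearest_relation(offset, relation_vecs):
    """Pick the relation whose embedding is closest to the offset."""
    return min(relation_vecs,
               key=lambda r: float(np.linalg.norm(offset - relation_vecs[r])))

# Toy 2-d embeddings (illustrative values, not learned).
h = np.array([0.2, 0.1])   # e.g., "Paris"
t = np.array([0.9, 0.6])   # e.g., "France"
relations = {
    "capital_of": np.array([0.7, 0.5]),
    "born_in":    np.array([-0.3, 0.2]),
}
pred = nearest_relation(relation_offset(h, t), relations)
```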

D.3 Rule-Mining Methods for Relation Path Reasoning

Many efforts focus on automatically mining logic rules to pave reasoning paths, including rule discovery methods such as AMIE [177], RLvLR [178], and RuleN [331]. Rather than searching for promising relation path patterns, rule mining approaches extract and prune logic rules from a reasonable KG, approaching the symbolic essence of knowledge, and then perform link prediction via the collected rule templates. However, unseen knowledge paths cannot easily be derived by logical rules in incomplete graphs.
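A minimal sketch of how a mined two-atom Horn rule can be applied as a template for link prediction over a triple set (entity and relation names are illustrative):

```python
def apply_rule(triples, body1, body2, head):
    """Apply a mined two-atom Horn rule
        body1(x, y) AND body2(y, z) => head(x, z)
    over a triple set, returning newly inferred (x, head, z) facts."""
    inferred = set()
    for (x, r1, y) in triples:
        if r1 != body1:
            continue
        for (y2, r2, z) in triples:
            if r2 == body2 and y2 == y:
                inferred.add((x, head, z))
    return inferred - triples  # keep only facts not already in the KG

kg = {
    ("alice", "born_in", "paris"),
    ("paris", "located_in", "france"),
}
new_facts = apply_rule(kg, "born_in", "located_in", "nationality")
# infers ("alice", "nationality", "france")
```

Systems like AMIE additionally score each rule by support and confidence before using it; that filtering step is omitted here.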
Another research direction is to inject logic rules into neural models to boost path reasoning. KALE [332] jointly embeds first-order logic rules with knowledge embeddings to enhance relation inference. RUGE [333] iteratively rectifies KG embeddings via learned soft rules and then performs relation path reasoning. Logic rules are thus leveraged as side semantic information in neural models. NeuralLP [179] proposes a neural framework to encode logic rule structures into vectorized embeddings with an attention mechanism. pLogicNet [180] introduces the Markov logic network (MLN) to model uncertain rules. ExpressGNN [181] further employs a GCN to capture neighborhood graph semantics with logic rules. These rule-based neural models can also be regarded as applications of differentiable learning, enabling gradient-based optimization for logic programming.

D.4 Other Advances

Researchers explore further strategies for flexible NER tasks. Transfer learning shares knowledge between different domains or models. Pan et al. [334] propose Transfer Joint Embedding (TJE) to jointly embed output labels and input samples from different domains, blending intrinsic entity features. Lin et al. [335] apply a neural network with adaptation layers to transfer parameter features from a model pre-trained on a different domain. Reinforcement learning (RL) lets NER models interact with the environment through an agent with a reward policy, such as the Markov decision process (MDP)-based model [336] and the Q-network-enhanced model [337]. Notably, researchers [338] have also leveraged RL for noise reduction in distant-supervised NER data. Adversarial learning generates counterexamples or perturbations to enforce the robustness of NER models, such as DATNet [339], which imposes perturbations on word representations, and counterexample generators ([340] and [341]). Moreover, active learning, which queries users to annotate selected samples, has also been applied to NER. Shen et al. [342] incrementally choose the most informative samples for NER labeling during training to mitigate the reliance on tagged samples.
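As a concrete illustration of active learning for NER, the following sketch implements least-confidence sampling, one common query strategy (the pool sentences and the toy probability model are hypothetical, standing in for a trained tagger):

```python
def least_confidence_batch(unlabeled, predict_proba, k=2):
    """Select the k samples the model is least confident about
    (lowest maximum class probability) for human annotation."""
    scored = [(max(predict_proba(x)), x) for x in unlabeled]
    scored.sort(key=lambda pair: pair[0])  # least confident first
    return [x for _, x in scored[:k]]

# Hypothetical model: returns a class-probability list per sentence.
def toy_proba(sentence):
    probs = {
        "Obama visited Paris":    [0.95, 0.05],
        "Apple released results": [0.55, 0.45],
        "Jordan scored twice":    [0.50, 0.50],
    }
    return probs[sentence]

pool = ["Obama visited Paris", "Apple released results", "Jordan scored twice"]
batch = least_confidence_batch(pool, toy_proba)  # the two ambiguous sentences
```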
Few-shot/zero-shot ET is a challenging issue. Ma et al. [343] model prototypes of entity label embeddings for zero-shot fine-grained ET in a model named Proto-HLE, which combines prototypical features with hierarchical type labels to infer the essential features of a new type. Zhang et al. [344] further propose MZET, which exploits contextual features and word embeddings with a memory network to provide semantic side information for few-shot ET.
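A minimal sketch of prototype-based few-shot classification in the spirit of the models above: each type's prototype is the mean of its support embeddings, and a query is assigned to the nearest prototype (the 2-d vectors are toy values):

```python
import numpy as np

def prototypes(support):
    """Class prototype = mean of its support embeddings."""
    return {label: np.mean(vecs, axis=0) for label, vecs in support.items()}

def classify(query_vec, protos):
    """Assign the query to the nearest prototype by Euclidean distance."""
    return min(protos, key=lambda c: float(np.linalg.norm(query_vec - protos[c])))

# Toy 2-d support set for two entity types (illustrative embeddings).
support = {
    "athlete":    [np.array([1.0, 0.0]), np.array([0.8, 0.2])],
    "politician": [np.array([0.0, 1.0]), np.array([0.2, 0.8])],
}
protos = prototypes(support)
label = classify(np.array([0.9, 0.1]), protos)
```

Models like Proto-HLE enrich the prototypes with hierarchical label information, but the nearest-prototype decision rule is the same.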
More probabilistic models have been developed for EL tasks. Guo et al. [345] propose a probabilistic model for unstructured data that leverages the prior probabilities of an entity, its context, and its name when performing linking. Han et al. [346] employ a reference graph of entities, assuming that entities co-occurring in the same documents should be semantically related.
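A naive-Bayes-style sketch of probabilistic linking that combines an entity prior with name and context likelihoods, in the spirit of the model above (all probabilities here are illustrative, not estimated from a corpus):

```python
def link_score(entity, mention, context, prior, name_prob, ctx_prob):
    """Score a candidate entity as prior * P(mention|entity) * P(context|entity),
    with a small smoothing constant for unseen observations."""
    return (prior[entity]
            * name_prob[entity].get(mention, 1e-6)
            * ctx_prob[entity].get(context, 1e-6))

# Toy distributions (illustrative values only).
prior = {"Michael_Jordan_(athlete)": 0.7, "Michael_Jordan_(scientist)": 0.3}
name_prob = {"Michael_Jordan_(athlete)":   {"Jordan": 0.6},
             "Michael_Jordan_(scientist)": {"Jordan": 0.4}}
ctx_prob = {"Michael_Jordan_(athlete)":   {"sports": 0.8, "ml": 0.01},
            "Michael_Jordan_(scientist)": {"sports": 0.05, "ml": 0.9}}

best = max(prior, key=lambda e: link_score(e, "Jordan", "ml",
                                           prior, name_prob, ctx_prob))
```

Even with a lower prior, strong context evidence ("ml") steers the link toward the scientist reading.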
Joint models for NER and EL reduce the error propagation of pipeline-based entity recognition. NEREL [347] couples NER and EL by ranking extracted mention-entity pairs to exploit the interaction features between entity mentions and their links. Graphical models are also effective designs for combining Named Entity Normalization (NEN) labels, which convert entity mentions into unambiguous forms, e.g., Washington (Person) and Washington (State). Li et al. [348] incorporate EL with NEN using a factor graph model, forming CRF chains over word entity types and their target nodes. Likewise, MINTREE [349] introduces a tree-based pair-linking model for collective tasks.
Cluster-based solutions handle the coreference resolution (CO) task as pairwise binary classification (co-referred or not). Early cluster models focus on mention-pair features. Soon et al. [350] propose a single-link clustering strategy to detect anaphoric pairs. Recasens et al. [351] further develop a mention-pair-based cluster model that emits either a coreference chain or a singleton leaf. Later, researchers concentrated on entity-based features to exploit complex anaphoric phenomena. Rahman and Ng [352] propose a mention-ranking clustering model to capture entity characteristics. Stoyanov and Eisner [353] develop agglomerative clustering to merge the best clusters based on entity features.
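The single-link clustering strategy can be sketched with a union-find over pairwise coreference decisions; the head-word-match classifier below is a toy stand-in for a learned mention-pair model:

```python
def cluster_mentions(mentions, is_coreferent):
    """Single-link clustering: union any mention pair the classifier
    deems coreferent, then read off the connected components."""
    parent = {m: m for m in mentions}

    def find(m):
        while parent[m] != m:
            parent[m] = parent[parent[m]]  # path compression
            m = parent[m]
        return m

    for i, a in enumerate(mentions):
        for b in mentions[i + 1:]:
            if is_coreferent(a, b):
                parent[find(a)] = find(b)

    clusters = {}
    for m in mentions:
        clusters.setdefault(find(m), []).append(m)
    return list(clusters.values())

# Toy pairwise classifier: exact head-word match (stand-in for a model).
coref = lambda a, b: a.split()[-1].lower() == b.split()[-1].lower()
out = cluster_mentions(["Barack Obama", "Obama", "the senator"], coref)
```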
Early researchers concentrated on statistical features for fast end-to-end joint RE, such as an Integer Linear Programming (ILP)-based algorithm [354] that resolves entities and relations via a conditional probabilistic model, a semi-Markov chain model [355] that jointly decodes global-level relation features, and MLNs [356] that model joint logic rules over entity labels and relationships. These early attempts deliver prototypes of entity-relationship interaction; however, statistical patterns are not expressive enough for intricate contexts.
Few-shot RC designs also consider feature augmentation strategies that mitigate data deficiency with dedicated model designs and background knowledge. Similar to [95], Levy et al. [357] cast zero-shot RC as a reading comprehension problem, comprehending unseen labels through a template converter. Soares et al. [358] compose a compound relation representation for each sentence from the BERT contextualized embeddings of the entity pair and the corresponding sentence. GCNs also deliver extra graph-level features for few-shot learning. Satorras and Estrach [359] propose a novel GCN framework that determines the relation tag of a query sample by calculating similarities between nodes. Moreover, Qu et al. [360] employ posterior distributions over prototypical vectors. Some designs also leverage semi-supervised data augmentation based on metric learning: the aforementioned Neural Snowball [121] (based on RSN) labels the query set via a Siamese network while drawing similar sample candidates from external distant-supervised sample sets to enrich the support set.
Many early attempts develop random-walk models for relation path reasoning that infer relational logic paths in a latent-variable graphical model. The Path-Ranking Algorithm (PRA) [361] generates a feature matrix to sample potential relation paths. However, feature sparsity in the graph impedes random-walk approaches. Semantic enrichment strategies have been proposed to mitigate this bottleneck, such as inducing vector space similarity [362] and clustering associated relations [363].
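A minimal sketch of PRA-style path features: enumerate the relation paths connecting a candidate entity pair up to a fixed length, which then serve as features for link prediction (the toy triples are illustrative):

```python
def path_features(kg, start, end, max_len=2):
    """Enumerate relation paths from start to end up to max_len hops;
    each path (a tuple of relations) acts as one PRA-style feature."""
    paths, frontier = [], [(start, ())]
    for _ in range(max_len):
        nxt = []
        for node, rels in frontier:
            for (h, r, t) in kg:
                if h == node:
                    if t == end:
                        paths.append(rels + (r,))
                    nxt.append((t, rels + (r,)))
        frontier = nxt
    return paths

kg = [
    ("alice", "works_at", "acme"),
    ("acme", "based_in", "london"),
    ("alice", "lives_in", "london"),
]
feats = path_features(kg, "alice", "london")
# finds the 1-hop path ("lives_in",) and the 2-hop path ("works_at", "based_in")
```

PRA itself estimates each path's probability by random walks rather than exhaustive enumeration; this sketch keeps only the path-as-feature idea.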
Early attempts target the unique attributes of entities for entity matching. Many models apply distance-based approaches to distributional representations of entity descriptions or definitions. VCU [364] proposes first-order and second-order vector models that embed the description words of an entity pair to comprehensively measure conceptual distance. TALN [365] leverages sense-based embeddings derived from BabelNet to combine the definitional descriptions of words: it first generates an embedding for each filtered definition word, combined with POS tags and syntactic features via BabelNet, then averages them into a centroid sense to retrieve the best matching candidates. String-similarity-based models applicable to entity matching also include TF-IDF [366] and I-Sub [367].
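A minimal sketch of TF-IDF-based matching over entity descriptions, in the spirit of the distance-based models above (the entity names and descriptions are toy examples):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Plain TF-IDF over whitespace-tokenized descriptions."""
    tokenized = {k: v.lower().split() for k, v in docs.items()}
    n = len(docs)
    df = Counter(t for toks in tokenized.values() for t in set(toks))
    return {k: {t: c * math.log(n / df[t])
                for t, c in Counter(toks).items()}
            for k, toks in tokenized.items()}

def cosine(a, b):
    """Cosine similarity between two sparse weight vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na, nb = (math.sqrt(sum(w * w for w in v.values())) for v in (a, b))
    return dot / (na * nb) if na and nb else 0.0

descs = {
    "jaguar_cat": "large wild cat native to the americas",
    "jaguar_car": "british manufacturer of luxury cars",
    "query":      "a big wild cat species",
}
vecs = tfidf_vectors(descs)
match = max(("jaguar_cat", "jaguar_car"),
            key=lambda e: cosine(vecs["query"], vecs[e]))
```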
Graph-based methods achieve feasible performance for entity matching on medium-scale KGs with hierarchical graph structures. ETF [368] learns concept representations through semantic and graph-based features, including Katz similarity, random-walk betweenness centrality, and an information propagation score. ParGenFS [369] leverages a graph-based fuzzy clustering algorithm to conceptualize a new entity: it simulates the thematic distribution to acquire distinctive concept clusters and search for the corresponding location of an entity update in a target KG.
Entity alignment tasks can also be handled by text-similarity-based models that detect surface similarity between entities, balancing the tradeoff between performance and computation cost. RDF-AI [370] proposes a systematic model to match two entity node graphs: it leverages string-matching and lexical-feature-similarity algorithms to align available attributes, then calculates entity similarity for alignment. Similarly, LIMES [371] further leverages metric spaces to detect aligned entity pairs, first generating entity exemplars to filter alignable candidates before computing similarity for entity fusion. Unlike small-scale KGs, large KGs contain meaningful relational paths and an enriched concept taxonomy. HolisticEM [372] employs IDF scores to compute the surface similarity of entity names for seed generation and utilizes Personalized PageRank (PPR) to measure distances between entity graphs by traversing their neighbor nodes.

F Knowledge Graph Storage

In this section, we provide a brief overview of KG storage tools for different data environments.
Early efforts utilize relational models to persist constructed KGs. Traditional RDBMSs provide reliable and swift CRUD operations for table-formed databases. Developers have also employed graph algorithms such as depth-first traversal and shortest-path search to enhance relational databases; Ref. [2] includes representative examples like PostgreSQL [389] and Filament. However, it can be very costly for a relational database to handle sparse KGs or to partition data for distributed storage.
Key/value databases are lightweight solutions for storing clusters of large KGs, supporting distributed storage with a simplified, flexible data format. Trinity [390] provides a high-performance in-memory key/value storage system to manage large KGs with billions of nodes, such as Probase. CouchDB [391] utilizes a replication mechanism to maintain dynamic KGs. MapReduce technology automatically transforms data groups into key/value mappings, and Hadoop enables high-throughput parallel computing for KG storage via MapReduce. Pregel [392] develops a superstep mechanism to share messages between vertices for parallel computing.
Another enlightening direction is to design graph databases that fit knowledge triple structures. Neo4j [393] is a lightweight NoSQL graph database supporting embedded dynamic KG storage. Sones provides object-oriented queries for KG databases. Novel languages have also been developed for knowledge storage, such as the Resource Description Framework (RDF) and the Web Ontology Language (OWL). Some RDF-based graph databases optimize the storage of graph structures; for example, gStore [392] improves RDF-structured KG databases via subgraph matching algorithms.
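A minimal sketch of how triples map onto a key/value layout: indexing by subject and by object gives the fast neighbor lookups that the key/value stores above exploit (the class and data are illustrative, not any particular system's API):

```python
class TripleStore:
    """Toy key/value-style triple store: indexes each triple by its
    subject and by its object for O(1) neighbor lookups."""

    def __init__(self):
        self.by_subject = {}  # subject -> list of (predicate, object)
        self.by_object = {}   # object  -> list of (predicate, subject)

    def add(self, s, p, o):
        self.by_subject.setdefault(s, []).append((p, o))
        self.by_object.setdefault(o, []).append((p, s))

    def outgoing(self, s):
        return self.by_subject.get(s, [])

    def incoming(self, o):
        return self.by_object.get(o, [])

store = TripleStore()
store.add("Paris", "capital_of", "France")
store.add("Lyon", "located_in", "France")
neighbors = store.incoming("France")
```

Production systems add persistence, partitioning, and richer indexes (e.g., by predicate), but the key-to-adjacency-list mapping is the core idea.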

References

[1]
Heiko Paulheim. 2017. Knowledge graph refinement: A survey of approaches and evaluation methods. Semantic Web 8, 3 (2017), 489–508. DOI:
[2]
Jihong Yan, Chengyu Wang, Wenliang Cheng, Ming Gao, and Aoying Zhou. 2018. A retrospective of knowledge graphs. Frontiers of Computer Science 12, 1 (2018), 55–74. DOI:
[3]
Xindong Wu, Jia Wu, Xiaoyi Fu, Jiachen Li, Peng Zhou, and Xu Jiang. 2019. Automatic knowledge graph construction: A report on the 2019 ICDM/ICBK contest. In Proceedings of the 2019 IEEE International Conference on Data Mining. 1540–1545. DOI:
[4]
Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Marttinen, and Philip S. Yu. 2022. A survey on knowledge graphs: Representation, acquisition, and applications. IEEE Transactions on Neural Networks and Learning Systems 33, 2 (2022), 494–514. DOI:
[5]
Aidan Hogan, Eva Blomqvist, Michael Cochez, Claudia d’Amato, Gerard de Melo, Claudio Gutierrez, Sabrina Kirrane, José Emilio Labra Gayo, Roberto Navigli, Sebastian Neumaier, Axel-Cyrille Ngonga Ngomo, Axel Polleres, Sabbir M. Rashid, Anisa Rula, Lukas Schmelzeisen, Juan Sequeda, Steffen Staab, and Antoine Zimmermann. 2021. Knowledge Graphs. Morgan & Claypool Publishers. DOI:
[6]
Zhenyu Zhang, Lei Zhang, Dingqi Yang, and Liu Yang. 2022. KRAN: Knowledge refining attention network for recommendation. ACM Transactions on Knowledge Discovery from Data 16, 2 (2022), 39:1–39:20. DOI:
[7]
Chunquan Chen and Si Li. 2020. Knowledge-based context-aware multi-turn conversational model with hierarchical attention. In Proceedings of the 2020 International Joint Conference on Neural Networks. 1–8. DOI:
[8]
Linmei Hu, Tianchi Yang, Luhao Zhang, Wanjun Zhong, Duyu Tang, Chuan Shi, Nan Duan, and Ming Zhou. 2021. Compare to the knowledge: Graph neural fake news detection with external knowledge. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 754–763. DOI:
[9]
Kurt D. Bollacker, Robert P. Cook, and Patrick Tufts. 2007. Freebase: A shared database of structured general human knowledge. In Proceedings of the AAAI-07, 2007. 1962–1963. Retrieved from http://www.aaai.org/Library/AAAI/2007/aaai07-355.php
[10]
Denny Vrandecic. 2012. Wikidata: A new platform for collaborative data collection. In Proceedings of the 21st International Conference on World Wide Web. 1063–1064. DOI:
[11]
Alexander Yates, Michele Banko, Matthew Broadhead, Michael J. Cafarella, Oren Etzioni, and Stephen Soderland. 2007. TextRunner: Open information extraction on the web. In Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics. 25–26. Retrieved from https://aclanthology.org/N07-4013/
[12]
Oren Etzioni, Michael J. Cafarella, Doug Downey, Stanley Kok, Ana-Maria Popescu, Tal Shaked, Stephen Soderland, Daniel S. Weld, and Alexander Yates. 2004. Web-scale information extraction in knowitall: (Preliminary results). In Proceedings of the 13th international conference on World Wide Web. 100–110. DOI:
[13]
Minghui Wu and Xindong Wu. 2019. On big wisdom. Knowledge and Information Systems 58, 1 (2019), 1–8. DOI:
[14]
Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv:1508.01991. Retrieved from http://arxiv.org/abs/1508.01991
[15]
Xuezhe Ma and Eduard H. Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In Proceedings of the ACL, 2016, Volume 1: Long Papers. DOI:
[16]
Peng Xu and Denilson Barbosa. 2018. Neural fine-grained entity type classification with hierarchy-aware loss. In Proceedings of the NAACL-HLT, 2018, Volume 1 (Long Papers). 16–25. DOI:
[17]
Xiang Ren, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, and Jiawei Han. 2016. Label noise reduction in entity typing by heterogeneous partial-label embedding. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 1825–1834. DOI:
[18]
Octavian-Eugen Ganea and Thomas Hofmann. 2017. Deep joint entity disambiguation with local neural attention. In Proceedings of the EMNLP, 2017. 2619–2629. DOI:
[19]
Phong Le and Ivan Titov. 2018. Improving entity linking by modeling latent relations between mentions. In Proceedings of the ACL, 2018, Volume 1: Long Papers. 1595–1604. DOI:
[20]
Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the EMNLP, 2017. 188–197. DOI:
[21]
Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. 1753–1762. DOI:
[22]
Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attention-based bidirectional long short-term memory networks for relation classification. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short papers). DOI:
[23]
Hongming Zhang, Daniel Khashabi, Yangqiu Song, and Dan Roth. 2020. TransOMCS: From linguistic graphs to commonsense knowledge. In Proceedings of the IJCAI, 2020. 4004–4010. DOI:
[24]
Hongming Zhang, Xin Liu, Haojie Pan, Haowen Ke, Jiefu Ou, Tianqing Fang, and Yangqiu Song. 2022. ASER: Towards large-scale commonsense knowledge acquisition via higher-order selectional preference over eventualities. Artif. Intell. 309 (2022), 103740. DOI:
[25]
Wu Xin-Dong, Sheng Shao-Jing, Jiang Ting-Ting, Bu Chen-Yang, and Wu Ming-Hui. 2020. Huapu-CP: from knowledge graphs to a data central-platform. Acta Automatica Sinica 46, 10 (2020), 2045–2059.
[26]
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the NAACL-HLT, 2019, Volume 1 (Long and Short Papers). 4171–4186. DOI:
[27]
Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, and Maosong Sun. 2019. DocRED: A large-scale document-level relation extraction dataset. In Proceedings of the ACL, 2019, Volume 1: Long Papers. 764–777. DOI:
[28]
Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2010, Barcelona, Spain, September 20-24, 2010, Proceedings, Part III 21. 148–163. DOI:
[29]
Xiaozhi Wang, Xu Han, Yankai Lin, Zhiyuan Liu, and Maosong Sun. 2018. Adversarial multi-lingual neural relation extraction. In Proceedings of the 27th International Conference on Computational Linguistics. 1156–1166. Retrieved from https://aclanthology.org/C18-1099/
[30]
Zhen Han, Peng Chen, Yunpu Ma, and Volker Tresp. 2021. Explainable subgraph reasoning for forecasting on temporal knowledge graphs. In Proceedings of the International Conference on Learning Representations. Retrieved from https://openreview.net/forum?id=pGIHq1m7PU
[31]
Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh V. Chawla, and Meng Jiang. 2019. Multi-input multi-output sequence labeling for joint extraction of fact and condition tuples from scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. 302–312. DOI:
[32]
Abhishek Pradhan, Ketan Kumar Todi, Anbarasan Selvarasu, and Atish Sanyal. 2020. Knowledge graph generation with deep active learning. In Proceedings of the International Joint Conference on Neural Networks. 1–8. DOI:
[33]
Xindong Wu, Xingquan Zhu, Gong-Qing Wu, and Wei Ding. 2014. Data mining with big data. IEEE Transactions on Knowledge and Data Engineering 26, 1 (2014), 97–107. DOI:
[34]
Tapas Nayak, Navonil Majumder, Pawan Goyal, and Soujanya Poria. 2021. Deep neural approaches to relation triplets extraction: A comprehensive survey. Cognitive Computation 13, 5 (2021), 1215–1232. DOI:
[35]
Sachin Pawar, Pushpak Bhattacharyya, and Girish K. Palshikar. 2021. Techniques for jointly extracting entities and relations: A survey. arXiv:2103.06118. Retrieved from https://arxiv.org/abs/2103.06118
[36]
Siddhant Arora. 2020. A survey on graph neural networks for knowledge graph completion. arXiv:2007.12374. Retrieved from https://arxiv.org/abs/2007.12374
[37]
Borui Cai, Yong Xiang, Longxiang Gao, He Zhang, Yunfeng Li, and Jianxin Li. 2022. Temporal knowledge graph completion: A survey. arXiv:2201.08236 Retrieved from https://arxiv.org/abs/2201.08236
[38]
Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. 2017. Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering 29, 12 (2017), 2724–2743. DOI:
[39]
Lisa Ehrlinger and Wolfram Wöß. 2016. Towards a definition of knowledge graphs. In Proceedings of the SEMANTiCS, SuCCESS’16, 2016 (CEUR Workshop Proceedings). Retrieved from http://ceur-ws.org/Vol-1695/paper4.pdf
[40]
Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary G. Ives. 2007. DBpedia: A nucleus for a web of open data. In Proceedings of the International Semantic Web Conference. 722–735. DOI:
[41]
Wentao Wu, Hongsong Li, Haixun Wang, and Kenny Qili Zhu. 2012. Probase: A probabilistic taxonomy for text understanding. In Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data. 481–492. DOI:
[42]
Zhigang Wang, Juanzi Li, Zhichun Wang, Shuangjie Li, Mingyang Li, Dongsheng Zhang, Yao Shi, Yongbin Liu, Peng Zhang, and Jie Tang. 2013. XLore: A large-scale english-chinese bilingual knowledge graph. In Proceedings of the ISWC (CEUR Workshop Proceedings). 121–124. Retrieved from http://ceur-ws.org/Vol-1035/iswc2013_demo_31.pdf
[43]
George A. Miller. 1995. WordNet: A lexical database for english. Communications of the ACM 38, 11 (1995), 39–41. DOI:
[44]
Cynthia Matuszek, John Cabral, Michael J. Witbrock, and John DeOliveira. 2006. An introduction to the syntax and content of cyc. In Proceedings of the Papers from the 2006 AAAI Spring Symposium, Technical Report SS-06-05, 2006. 44–49. Retrieved from http://www.aaai.org/Library/Symposia/Spring/2006/ss06-05-007.php
[45]
Thomas Steiner, Ruben Verborgh, Raphaël Troncy, Joaquim Gabarró, and Rik Van de Walle. 2012. Adding realtime coverage to the google knowledge graph. In Proceedings of the 11th International Semantic Web Conference. Retrieved from http://ceur-ws.org/Vol-914/paper_2.pdf
[46]
David S. Wishart, Craig Knox, Anchi Guo, Savita Shrivastava, Murtaza Hassanali, Paul Stothard, Zhan Chang, and Jennifer Woolsey. 2006. DrugBank: A comprehensive resource for in Silico Drug Discovery and Exploration. Nucleic Acids Res. 34, Database-Issue (2006), 668–672. DOI:
[47]
Wu Gong-Qing, Hu Jun, Li Li, Xu Zhe-Hao, Liu Peng-Cheng, Hu Xue-Gang, and Wu Xin-Dong. 2016. Online web news extraction via tag path feature fusion. Ruan Jian Xue Bao/Journal of Software 27, 3 (2016), 714–735.
[48]
Yanzeng Li and Lei Zou. 2022. gBuilder: A scalable knowledge graph construction system for unstructured corpus. arXiv:2208.09705. Retrieved from https://arxiv.org/abs/2208.09705
[49]
Xu Han, Shulin Cao, Xin Lv, Yankai Lin, Zhiyuan Liu, Maosong Sun, and Juanzi Li. 2018. OpenKE: An open toolkit for knowledge embedding. In Proceedings of the EMNLP, 2018. 139–144. DOI:
[50]
Zequn Sun, Qingheng Zhang, Wei Hu, Chengming Wang, Muhao Chen, Farahnaz Akrami, and Chengkai Li. 2020. A benchmarking study of embedding-based entity alignment for knowledge graphs. Proceedings of the VLDB Endowment 13, 11 (2020), 2326–2340. Retrieved from http://www.vldb.org/pvldb/vol13/p2326-sun.pdf
[51]
Guodong Zhou and Jian Su. 2002. Named entity recognition using an HMM-based chunk tagger. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. 473–480. DOI:
[52]
Jenny Rose Finkel, Trond Grenager, and Christopher D. Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics. 363–370. DOI:
[53]
Jun Zhu, Zaiqing Nie, Ji-Rong Wen, Bo Zhang, and Wei-Ying Ma. 2005. 2D conditional random fields for web information extraction. In Proceedings of the 22nd International Conference on Machine Learning. 1044–1051. DOI:
[54]
Charles Sutton, Khashayar Rohanimanesh, and Andrew McCallum. 2004. Dynamic conditional random fields: Factorized probabilistic models for labeling and segmenting sequence data. In Proceedings of the 21st International Conference on Machine Learning. DOI:
[55]
Jun Zhu, Zaiqing Nie, Ji-Rong Wen, Bo Zhang, and Wei-Ying Ma. 2006. Simultaneous record detection and attribute labeling in web data extraction. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 494–503. DOI:
[56]
Aidan Finn and Nicholas Kushmerick. 2004. Multi-level boundary classification for information extraction. In Proceedings of the European Conference on Machine Learning. 111–122. DOI:
[57]
Sheng Zhang, Kevin Duh, and Benjamin Van Durme. 2018. Fine-grained entity typing through increased discourse context and adaptive classification thresholds. In Proceedings of the *SEM@NAACL-HLT, 2018. 173–179. DOI:
[58]
Jing Li, Aixin Sun, Jianglei Han, and Chenliang Li. 2022. A survey on deep learning for named entity recognition. IEEE Transactions on Knowledge and Data Engineering. 34, 1 (2022), 50–70. DOI:
[59]
Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12 (2011), 2493–2537. Retrieved from http://dl.acm.org/citation.cfm?id=2078186
[60]
Lishuang Li, Liuke Jin, Zhenchao Jiang, Dingxin Song, and Degen Huang. 2015. Biomedical named entity recognition based on extended recurrent neural networks. In Proceedings of the 2015 IEEE International Conference on Bioinformatics and Biomedicine. 649–652. DOI:
[61]
Thien Huu Nguyen, Avirup Sil, Georgiana Dinu, and Radu Florian. 2016. Toward mention detection robustness with recurrent neural networks. arXiv:1602.07749. Retrieved from http://arxiv.org/abs/1602.07749
[62]
Emma Strubell, Patrick Verga, David Belanger, and Andrew McCallum. 2017. Fast and accurate entity recognition with iterated dilated convolutions. In Proceedings of the EMNLP, 2017. 2670–2680. DOI:
[63]
Ling Luo, Zhihao Yang, Pei Yang, Yin Zhang, Lei Wang, Hongfei Lin, and Jian Wang. 2018. An attention-based BiLSTM-CRF approach to document-level chemical named entity recognition. Bioinformatics 34, 8 (2018), 1381–1388. DOI:
[64]
Andrej Zukov Gregoric, Yoram Bachrach, Pasha Minkovsky, Sam Coope, and Bogdan Maksak. 2017. Neural named entity recognition using a self-attention mechanism. In Proceedings of the 2017 IEEE 29th International Conference on Tools with Artificial Intelligence. 652–656. DOI:
[65]
Alberto Cetoli, Stefano Bragaglia, Andrew D. O’Harney, and Marc Sloan. 2018. Graph convolutional networks for named entity recognition. In Proceedings of the TLT, 2018. 37–45. Retrieved from https://aclanthology.org/W17-7607/
[66]
Cihan Dogan, Aimore Dutra, Adam Gara, Alfredo Gemma, Lei Shi, Michael Sigamani, and Ella Walters. 2019. Fine-grained named entity recognition using ELMo and wikidata. arXiv:1904.10503. Retrieved from http://arxiv.org/abs/1904.10503
[67]
Mingyi Liu, Zhiying Tu, Tong Zhang, Tonghua Su, Xiaofei Xu, and Zhongjie Wang. 2022. LTP: A new active learning strategy for CRF-based named entity recognition. Neural Processing Letters (2022), 1–22.
[68]
Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. 2020. LUKE: Deep contextualized entity representations with entity-aware self-attention. In Proceedings of the EMNLP, 2020. 6442–6454. DOI:
[69]
Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. 2017. Neural architectures for fine-grained entity type classification. In Proceedings of the EACL, 2017, Volume 1: Long Papers. 1271–1280. DOI:
[70]
Varish Mulwad, Tim Finin, Zareen Syed, and Anupam Joshi. 2010. Using linked data to interpret tables. In Proceedings of the 1st International Workshop on Consuming Linked Data, 2010 (CEUR Workshop Proceedings). Retrieved from http://ceur-ws.org/Vol-665/MulwadEtAl_COLD2010.pdf
[71]
Girija Limaye, Sunita Sarawagi, and Soumen Chakrabarti. 2010. Annotating and searching web tables using entities, types and relationships. Proceedings of the VLDB Endowment 3, 1 (2010), 1338–1347. DOI:
[72]
Chandra Sekhar Bhagavatula, Thanapon Noraset, and Doug Downey. 2015. TabEL: Entity linking in web tables. In Proceedings of the International Semantic Web Conference. 425–441. DOI:
[73]
Tianxing Wu, Shengjia Yan, Zhixin Piao, Liang Xu, Ruiming Wang, and Guilin Qi. 2016. Entity linking in web tables with multiple linked knowledge bases. In Semantic Technology: 6th Joint International Conference, JIST 2016, Singapore, Singapore, November 2-4, 2016, Revised Selected Papers 6. 239–253. DOI:
[74]
Vasilis Efthymiou, Oktie Hassanzadeh, Mariano Rodriguez-Muro, and Vassilis Christophides. 2017. Matching web tables with knowledge base entities: From entity lookups to entity embeddings. In The Semantic Web - ISWC 2017: 16th International Semantic Web Conference, Vienna, Austria, October 21-25, 2017, Proceedings, Part I 16. 260–277. DOI:
[75]
Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2021. Autoregressive entity retrieval. In Proceedings of the ICLR. Retrieved from https://openreview.net/forum?id=5k8F6UU39V
[76]
Jie Cai and Michael Strube. 2010. End-to-end coreference resolution via hypergraph partitioning. In Proceedings of the COLING, 2010. 143–151. Retrieved from https://aclanthology.org/C10-1017/
[77]
Emili Sapena, Lluís Padró, and Jordi Turmo. 2013. A constraint-based hypergraph partitioning approach to coreference resolution. Computational Linguistics 39, 4 (2013), 847–884. DOI:
[78]
David L. Bean and Ellen Riloff. 2004. Unsupervised learning of contextual role knowledge for coreference resolution. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004. 297–304. Retrieved from https://aclanthology.org/N04-1038/
[79]
Eraldo R. Fernandes, Cícero Nogueira dos Santos, and Ruy Luiz Milidiú. 2012. Latent structure perceptron with feature induction for unrestricted coreference resolution. In Proceedings of the Joint Conference on EMNLP and CoNLL-Shared Task 2012. 41–48. Retrieved from https://aclanthology.org/W12-4502/
[80]
Xue-Feng Xi, Guodong Zhou, Fuyuan Hu, and Baochuan Fu. 2015. A convolutional deep neural network for coreference resolution via modeling hierarchical features. In Intelligence Science and Big Data Engineering. Big Data and Machine Learning Techniques: 5th International Conference, IScIDE 2015, Suzhou, China, June 14-16, 2015, Revised Selected Papers, Part II 5. 361–372. DOI:
[81]
Jheng-Long Wu and Wei-Yun Ma. 2017. A deep learning framework for coreference resolution based on convolutional neural network. In Proceedings of the 2017 IEEE 11th International Conference on Semantic Computing. 61–64. DOI:
[82]
Sam Wiseman, Alexander M. Rush, and Stuart M. Shieber. 2016. Learning global features for coreference resolution. In Proceedings of the NAACL-HLT, 2016. 994–1004. DOI:
[83]
Jia-Chen Gu, Zhen-Hua Ling, and Nitin Indurkhya. 2018. A study on improving end-to-end neural coreference resolution. In Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data: 17th China National Conference, CCL 2018, and 6th International Symposium, NLP-NABD 2018, Changsha, China, October 19-21, 2018, Proceedings 17. 159–169. DOI:
[84]
Rui Zhang, Cícero Nogueira dos Santos, Michihiro Yasunaga, Bing Xiang, and Dragomir R. Radev. 2018. Neural coreference resolution with deep biaffine attention by joint mention detection and mention clustering. In Proceedings of the ACL. 102–107. DOI:
[85]
Jie Ma, Jun Liu, Yufei Li, Xin Hu, Yudai Pan, Shen Sun, and Qika Lin. 2020. Jointly optimized neural coreference resolution with mutual attention. In Proceedings of the 13th International Conference on Web Search and Data Mining. 402–410. DOI:
[86]
Kevin Clark and Christopher D. Manning. 2016. Deep reinforcement learning for mention-ranking coreference models. In Proceedings of the EMNLP, 2016. 2256–2262. DOI:
[87]
Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of the COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers. 2335–2344. Retrieved from https://aclanthology.org/C14-1220/
[88]
Thien Huu Nguyen and Ralph Grishman. 2015. Relation extraction: Perspective from convolutional neural networks. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing. 39–48. DOI:
[89]
Yatian Shen and Xuanjing Huang. 2016. Attention-based convolutional neural network for semantic relation extraction. In Proceedings of the COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. 2526–2536. Retrieved from https://aclanthology.org/C16-1238/
[90]
Linlin Wang, Zhu Cao, Gerard de Melo, and Zhiyuan Liu. 2016. Relation classification via multi-level attention CNNs. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). DOI:
[91]
Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using LSTMs on sequences and tree structures. In Proceedings of the ACL, 2016, Volume 1: Long Papers. DOI:
[92]
Yi Zhao, Huaiyu Wan, Jianwei Gao, and Youfang Lin. 2019. Improving relation classification by entity pair graph. In Proceedings of the Asian Conference on Machine Learning. 1156–1171. Retrieved from http://proceedings.mlr.press/v101/zhao19a.html
[93]
Zhijiang Guo, Yan Zhang, and Wei Lu. 2019. Attention guided graph convolutional networks for relation extraction. In Proceedings of the ACL, 2019, Volume 1: Long Papers. 241–251. DOI:
[94]
Kang Zhao, Hua Xu, Yue Cheng, Xiaoteng Li, and Kai Gao. 2021. Representation iterative fusion based on heterogeneous graph neural network for joint entity and relation extraction. Knowledge-Based Systems 219 (2021), 106888. DOI:
[95]
Amir D. N. Cohen, Shachar Rosenman, and Yoav Goldberg. 2020. Relation extraction as two-way span-prediction. arXiv:2010.04829. Retrieved from https://arxiv.org/abs/2010.04829
[96]
Razvan C. Bunescu and Raymond J. Mooney. 2007. Learning to extract relations from the web using minimal supervision. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics. Retrieved from https://aclanthology.org/P07-1073/
[97]
Frank Reichartz, Hannes Korte, and Gerhard Paass. 2010. Semantic relation extraction with kernels over typed dependency trees. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 773–782. DOI:
[98]
Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2002. Kernel methods for relation extraction. In Proceedings of the EMNLP. 71–78. DOI:
[99]
Nicholas FitzGerald, Oscar Täckström, Kuzman Ganchev, and Dipanjan Das. 2015. Semantic role labeling with neural network factors. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. 960–970. DOI:
[100]
Michael Roth and Mirella Lapata. 2016. Neural semantic role labeling with dependency path embeddings. In Proceedings of the ACL. DOI:
[101]
Keshav Kolluru, Samarth Aggarwal, Vipul Rathore, Mausam, and Soumen Chakrabarti. 2020. IMoJIE: Iterative memory-based joint open information extraction. In Proceedings of the ACL, 2020. 5871–5886. DOI:
[102]
Ruidong Wu, Yuan Yao, Xu Han, Ruobing Xie, Zhiyuan Liu, Fen Lin, Leyu Lin, and Maosong Sun. 2019. Open relation extraction: Relational knowledge transfer from supervised data to unsupervised data. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. 219–228. DOI:
[103]
Varish Mulwad, Tim Finin, and Anupam Joshi. 2013. Semantic message passing for generating linked data from tables. In The Semantic Web – ISWC 2013: 12th International Semantic Web Conference, Sydney, NSW, Australia, October 21–25, 2013, Proceedings, Part I 12. 363–378. DOI:
[104]
Zhe Chen and Michael J. Cafarella. 2013. Automatic web spreadsheet data extraction. In Proceedings of the 3rd International Workshop on Semantic Search over the Web. 1:1–1:8. DOI:
[105]
Jun Zhu, Zaiqing Nie, Xiaojiang Liu, Bo Zhang, and Ji-Rong Wen. 2009. StatSnowball: A statistical approach to extracting entity relationships. In Proceedings of the 18th International Conference on World Wide Web. 101–110. DOI:
[106]
Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). DOI:
[107]
Xiaotian Jiang, Quan Wang, Peng Li, and Bin Wang. 2016. Relation extraction with multi-instance multi-label convolutional neural networks. In Proceedings of the COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. 1471–1480. Retrieved from https://aclanthology.org/C16-1139/
[108]
Bingfeng Luo, Yansong Feng, Zheng Wang, Zhanxing Zhu, Songfang Huang, Rui Yan, and Dongyan Zhao. 2017. Learning with noise: Enhance distantly supervised relation extraction with dynamic transition matrix. In Proceedings of the ACL, 2017, Volume 1: Long Papers. 430–439. DOI:
[109]
Pengda Qin, Weiran Xu, and William Yang Wang. 2018. Robust distant supervision relation extraction via deep reinforcement learning. In Proceedings of the ACL, 2018, Volume 1: Long Papers. 2137–2147. DOI:
[110]
Pengda Qin, Weiran Xu, and William Yang Wang. 2018. DSGAN: Generative adversarial training for distant supervision relation extraction. In Proceedings of the ACL, 2018, Volume 1: Long Papers. 496–505. DOI:
[111]
Yuyun Huang and Jinhua Du. 2019. Self-attention enhanced CNNs and collaborative curriculum learning for distantly supervised relation extraction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. 389–398. DOI:
[112]
Xu Han, Pengfei Yu, Zhiyuan Liu, Maosong Sun, and Peng Li. 2018. Hierarchical relation extraction with coarse-to-fine grained attention. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 2236–2245. DOI:
[113]
Zhi-Xiu Ye and Zhen-Hua Ling. 2019. Distant supervision relation extraction with intra-bag and inter-bag attentions. In Proceedings of the NAACL-HLT, 2019, Volume 1 (Long and Short Papers). 2810–2819. DOI:
[114]
Yujin Yuan, Liyuan Liu, Siliang Tang, Zhongfei Zhang, Yueting Zhuang, Shiliang Pu, Fei Wu, and Xiang Ren. 2019. Cross-relation cross-bag attention for distantly-supervised relation extraction. In Proceedings of the AAAI Conference on Artificial Intelligence. 419–426. DOI:
[115]
Guoliang Ji, Kang Liu, Shizhu He, and Jun Zhao. 2017. Distant supervision for relation extraction with sentence-level attention and entity descriptions. In Proceedings of the AAAI Conference on Artificial Intelligence. 3060–3066. Retrieved from http://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14491
[116]
Shikhar Vashishth, Rishabh Joshi, Sai Suman Prayaga, Chiranjib Bhattacharyya, and Partha P. Talukdar. 2018. RESIDE: Improving distantly-supervised neural relation extraction using side information. In Proceedings of the EMNLP, 2018. 1257–1266. DOI:
[117]
Ningyu Zhang, Shumin Deng, Zhanlin Sun, Guanying Wang, Xi Chen, Wei Zhang, and Huajun Chen. 2019. Long-tail relation extraction via knowledge graph embeddings and graph convolution networks. In Proceedings of the NAACL-HLT. 3016–3025. DOI:
[118]
Lei Cui, Furu Wei, and Ming Zhou. 2018. Neural open information extraction. In Proceedings of the ACL. 407–413. DOI:
[119]
Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. 1003–1011. Retrieved from https://aclanthology.org/P09-1113/
[120]
Jake Snell, Kevin Swersky, and Richard S. Zemel. 2017. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems 30 (2017), 4077–4087. Retrieved from https://proceedings.neurips.cc/paper/2017/hash/cb8da6767461f2812ae4290eac7cbc42-Abstract.html
[121]
Tianyu Gao, Xu Han, Ruobing Xie, Zhiyuan Liu, Fen Lin, Leyu Lin, and Maosong Sun. 2020. Neural snowball for few-shot relation learning. In Proceedings of the AAAI Conference on Artificial Intelligence. 7772–7779. Retrieved from https://aaai.org/ojs/index.php/AAAI/article/view/6281
[122]
Oriol Vinyals, Charles Blundell, Tim Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. 2016. Matching networks for one shot learning. In Advances in Neural Information Processing Systems 29 (2016), 3630–3638. Retrieved from https://proceedings.neurips.cc/paper/2016/hash/90e1357833654983612fb05e3ec9148c-Abstract.html
[123]
Wenhan Xiong, Mo Yu, Shiyu Chang, Xiaoxiao Guo, and William Yang Wang. 2018. One-shot relational learning for knowledge graphs. In Proceedings of the EMNLP, 2018. 1980–1990. DOI:
[124]
Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the International Conference on Machine Learning. 1126–1135. Retrieved from http://proceedings.mlr.press/v70/finn17a.html
[125]
Tsendsuren Munkhdalai and Hong Yu. 2017. Meta networks. In Proceedings of the International Conference on Machine Learning. 2554–2563. Retrieved from http://proceedings.mlr.press/v70/munkhdalai17a.html
[126]
Tongtong Wu, Xuekai Li, Yuan-Fang Li, Gholamreza Haffari, Guilin Qi, Yujin Zhu, and Guoqiang Xu. 2021. Curriculum-meta learning for order-robust continual relation extraction. In Proceedings of the AAAI Conference on Artificial Intelligence. 10363–10369. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17241
[127]
Miao Fan, Yeqi Bai, Mingming Sun, and Ping Li. 2019. Large margin prototypical network for few-shot relation classification with fine-grained features. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management. 2353–2356. DOI:
[128]
Tianyu Gao, Xu Han, Zhiyuan Liu, and Maosong Sun. 2019. Hybrid attention-based prototypical networks for noisy few-shot relation classification. In Proceedings of the AAAI Conference on Artificial Intelligence. 6407–6414. DOI:
[129]
Zhi-Xiu Ye and Zhen-Hua Ling. 2019. Multi-level matching and aggregation network for few-shot relation classification. In Proceedings of the ACL, 2019, Volume 1: Long Papers. 2872–2881. DOI:
[130]
Tianyu Gao, Xu Han, Hao Zhu, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou. 2019. FewRel 2.0: Towards more challenging few-shot relation classification. In Proceedings of the EMNLP-IJCNLP, 2019. 6249–6254. DOI:
[131]
Suncong Zheng, Yuexing Hao, Dongyuan Lu, Hongyun Bao, Jiaming Xu, Hongwei Hao, and Bo Xu. 2017. Joint entity and relation extraction based on a hybrid neural network. Neurocomputing 257 (2017), 59–66. DOI:
[132]
Tsu-Jui Fu, Peng-Hsuan Li, and Wei-Yun Ma. 2019. GraphRel: Modeling text as relational graphs for joint entity and relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 1409–1418. DOI:
[133]
Zhepei Wei, Jianlin Su, Yue Wang, Yuan Tian, and Yi Chang. 2020. A novel cascade binary tagging framework for relational triple extraction. In Proceedings of the ACL, 2020. 1476–1488. DOI:
[134]
Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao, Peng Zhou, and Bo Xu. 2017. Joint extraction of entities and relations based on a novel tagging scheme. In Proceedings of the ACL 2017, Volume 1: Long Papers. 1227–1236. DOI:
[135]
Yucheng Wang, Bowen Yu, Yueyang Zhang, Tingwen Liu, Hongsong Zhu, and Limin Sun. 2020. TPLinker: Single-stage joint extraction of entities and relations through token pair linking. In Proceedings of the COLING, 2020. 1572–1582. DOI:
[136]
Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018. Joint entity recognition and relation extraction as a multi-head selection problem. Expert Systems with Applications 114 (2018), 34–45. DOI:
[137]
Xiaoya Li, Fan Yin, Zijun Sun, Xiayu Li, Arianna Yuan, Duo Chai, Mingxin Zhou, and Jiwei Li. 2019. Entity-relation extraction as multi-turn question answering. In Proceedings of the ACL, 2019, Volume 1: Long Papers. 1340–1350. DOI:
[138]
Hao Chen, Chenwei Zhang, Jun Li, Philip S. Yu, and Ning Jing. 2022. KGGen: A generative approach for incipient knowledge graph population. IEEE Transactions on Knowledge and Data Engineering 34, 5 (2022), 2254–2267. DOI:
[139]
Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih. 2017. Cross-sentence n-ary relation extraction with graph LSTMs. Transactions of the Association for Computational Linguistics 5 (2017), 101–115. Retrieved from https://transacl.org/ojs/index.php/tacl/article/view/1028
[140]
Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018. N-ary relation extraction using graph-state LSTM. In Proceedings of the EMNLP, 2018. 2226–2235. DOI:
[141]
Sunil Kumar Sahu, Fenia Christopoulou, Makoto Miwa, and Sophia Ananiadou. 2019. Inter-sentence relation extraction with document-level graph convolutional neural network. In Proceedings of the ACL, 2019, Volume 1: Long Papers. 4309–4316. DOI:
[142]
Hao Zhu, Yankai Lin, Zhiyuan Liu, Jie Fu, Tat-Seng Chua, and Maosong Sun. 2019. Graph neural networks with generated parameters for relation extraction. In Proceedings of the ACL, 2019, Volume 1: Long Papers. 1331–1339. DOI:
[143]
Fenia Christopoulou, Makoto Miwa, and Sophia Ananiadou. 2019. Connecting the dots: Document-level neural relation extraction with edge-oriented graphs. In Proceedings of the EMNLP-IJCNLP, 2019. 4924–4935. DOI:
[144]
Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. In Proceedings of the NAACL-HLT, 2019, Volume 1 (Long and Short Papers). 3036–3046. DOI:
[145]
Guoshun Nan, Zhijiang Guo, Ivan Sekulic, and Wei Lu. 2020. Reasoning with latent structure refinement for document-level relation extraction. In Proceedings of the ACL, 2020. 1546–1557. DOI:
[146]
Shuang Zeng, Runxin Xu, Baobao Chang, and Lei Li. 2020. Double graph based reasoning for document-level relation extraction. In Proceedings of the EMNLP, 2020. 1630–1640. DOI:
[147]
Wang Xu, Kehai Chen, and Tiejun Zhao. 2021. Document-level relation extraction with reconstruction. In Proceedings of the AAAI Conference on Artificial Intelligence. 14167–14175. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17667
[148]
Wang Xu, Kehai Chen, and Tiejun Zhao. 2021. Discriminative reasoning for document-level relation extraction. In Proceedings of the ACL/IJCNLP, 2021. 1653–1663. DOI:
[149]
Zhenyu Zhang, Bowen Yu, Xiaobo Shu, Mengge Xue, Tingwen Liu, and Li Guo. 2021. From what to why: Improving relation extraction with rationale graph. In Findings of the Association for Computational Linguistics: ACL/IJCNLP.
[150]
Ningyu Zhang, Xiang Chen, Xin Xie, Shumin Deng, Chuanqi Tan, Mosha Chen, Fei Huang, Luo Si, and Huajun Chen. 2021. Document-level relation extraction as semantic segmentation. In Proceedings of the IJCAI, 2021. 3999–4006. DOI:
[151]
Wenxuan Zhou, Kevin Huang, Tengyu Ma, and Jing Huang. 2021. Document-level relation extraction with adaptive thresholding and localized context pooling. In Proceedings of the AAAI Conference on Artificial Intelligence. 14612–14620. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17717
[152]
Qingyu Tan, Ruidan He, Lidong Bing, and Hwee Tou Ng. 2022. Document-level relation extraction with adaptive focal loss and knowledge distillation. In Proceedings of the ACL. 1672–1681. DOI:
[153]
Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for automatic knowledge graph construction. In Proceedings of the ACL, 2019, Volume 1: Long Papers. 4762–4779. DOI:
[154]
Changsung Moon, Paul Jones, and Nagiza F. Samatova. 2017. Learning entity type embeddings for knowledge graph completion. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. 2215–2218. DOI:
[155]
Guoliang Ji, Kang Liu, Shizhu He, and Jun Zhao. 2016. Knowledge graph completion with adaptive sparse transfer matrix. In Proceedings of the AAAI Conference on Artificial Intelligence. 985–991. Retrieved from http://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/view/11982
[156]
Ivana Balazevic, Carl Allen, and Timothy M. Hospedales. 2019. TuckER: Tensor factorization for knowledge graph completion. In Proceedings of the EMNLP-IJCNLP, 2019. 5184–5193. DOI:
[157]
Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Proceedings of the Advances in Neural Information Processing Systems. 926–934. Retrieved from https://proceedings.neurips.cc/paper/2013/hash/b337e84de8752b27eda3a12363109e80-Abstract.html
[158]
Zhanqiu Zhang, Jianyu Cai, Yongdong Zhang, and Jie Wang. 2020. Learning hierarchy-aware knowledge graph embeddings for link prediction. In Proceedings of the AAAI Conference on Artificial Intelligence. 3065–3072. Retrieved from https://aaai.org/ojs/index.php/AAAI/article/view/5701
[159]
Guanglin Niu, Bo Li, Yongfei Zhang, and Shiliang Pu. 2022. CAKE: A scalable commonsense-aware framework for multi-view knowledge graph completion. arXiv:2202.13785. Retrieved from https://arxiv.org/abs/2202.13785
[160]
Liang Wang, Wei Zhao, Zhuoyu Wei, and Jingming Liu. 2022. SimKGC: Simple contrastive knowledge graph completion with pre-trained language models. In Proceedings of the ACL. DOI:
[161]
Kai Wang, Yu Liu, and Quan Z. Sheng. 2022. Swift and sure: Hardness-aware contrastive learning for low-dimensional knowledge graph embeddings. In Proceedings of the ACM Web Conference 2022. 838–849. DOI:
[162]
Agustín Borrego, Daniel Ayala, Inma Hernández, Carlos R. Rivero, and David Ruiz. 2021. CAFE: Knowledge graph completion using neighborhood-aware features. Engineering Applications of Artificial Intelligence 103 (2021), 104302. DOI:
[163]
Junkang Wu, Wentao Shi, Xuezhi Cao, Jiawei Chen, Wenqiang Lei, Fuzheng Zhang, Wei Wu, and Xiangnan He. 2021. DisenKGAT: Knowledge graph embedding with disentangled graph attention network. In Proceedings of the 30th ACM International Conference on Information and Knowledge Management. 2140–2149. DOI:
[164]
Arvind Neelakantan, Benjamin Roth, and Andrew McCallum. 2015. Compositional vector space models for knowledge base completion. In Proceedings of the ACL, 2015, Volume 1: Long Papers. 156–166. DOI:
[165]
Rajarshi Das, Arvind Neelakantan, David Belanger, and Andrew McCallum. 2017. Chains of reasoning over entities, relations, and text using recurrent neural networks. In Proceedings of the EACL, 2017, Volume 1: Long Papers. 132–141. DOI:
[166]
Wenhan Xiong, Thien Hoang, and William Yang Wang. 2017. DeepPath: A reinforcement learning method for knowledge graph reasoning. In Proceedings of the EMNLP, 2017. 564–573. DOI:
[167]
Xi Victoria Lin, Richard Socher, and Caiming Xiong. 2018. Multi-hop knowledge graph reasoning with reward shaping. In Proceedings of the EMNLP, 2018. 3243–3253. DOI:
[168]
Zixuan Li, Xiaolong Jin, Saiping Guan, Yuanzhuo Wang, and Xueqi Cheng. 2018. Path reasoning over knowledge graph: A multi-agent and reinforcement learning based method. In Proceedings of the 2018 IEEE International Conference on Data Mining Workshops. 929–936. DOI:
[169]
Yelong Shen, Jianshu Chen, Po-Sen Huang, Yuqing Guo, and Jianfeng Gao. 2018. M-Walk: Learning to walk over graphs using Monte Carlo tree search. In Advances in Neural Information Processing Systems (2018), 6787–6798. Retrieved from https://proceedings.neurips.cc/paper/2018/hash/c6f798b844366ccd65d99bc7f31e0e02-Abstract.html
[170]
Yuyu Zhang, Hanjun Dai, Zornitsa Kozareva, Alexander J. Smola, and Le Song. 2018. Variational reasoning for question answering with knowledge graph. In Proceedings of the AAAI Conference on Artificial Intelligence. 6069–6076. Retrieved from https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16983
[171]
Sara Kardani-Moghaddam, Rajkumar Buyya, and Kotagiri Ramamohanarao. 2021. ADRL: A hybrid anomaly-aware deep reinforcement learning-based resource scaling in clouds. IEEE Transactions on Parallel and Distributed Systems 32, 3 (2021), 514–526. DOI:
[172]
Heng Wang, Shuangyin Li, Rong Pan, and Mingzhi Mao. 2019. Incorporating graph attention mechanism into knowledge graph reasoning based on deep reinforcement learning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. 2623–2631. DOI:
[173]
Mingming Zheng, Yanquan Zhou, and Qingyao Cui. 2021. Hierarchical policy network with multi-agent for knowledge graph reasoning based on reinforcement learning. In Proceedings of the International Conference on Knowledge Science, Engineering and Management. 445–457. DOI:
[174]
Prayag Tiwari, Hongyin Zhu, and Hari Mohan Pandey. 2021. DAPath: Distance-aware knowledge graph reasoning based on deep reinforcement learning. Neural Networks 135 (2021), 1–12. DOI:
[175]
Shuangyin Li, Heng Wang, Rong Pan, and Mingzhi Mao. 2021. MemoryPath: A deep reinforcement learning framework for incorporating memory component into knowledge graph reasoning. Neurocomputing 419 (2021), 273–286. DOI:
[176]
Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence 267 (2019), 1–38. DOI:
[177]
Luis Antonio Galárraga, Christina Teflioudi, Katja Hose, and Fabian M. Suchanek. 2013. AMIE: Association rule mining under incomplete evidence in ontological knowledge bases. In Proceedings of the 22nd International Conference on World Wide Web. 413–422. DOI:
[178]
Pouya Ghiasnezhad Omran, Kewen Wang, and Zhe Wang. 2018. Scalable rule learning via learning representation. In Proceedings of the IJCAI, 2018. 2149–2155. DOI:
[179]
Fan Yang, Zhilin Yang, and William W. Cohen. 2017. Differentiable learning of logical rules for knowledge base reasoning. In Proceedings of the Advances in Neural Information Processing Systems. 2319–2328. Retrieved from https://proceedings.neurips.cc/paper/2017/hash/0e55666a4ad822e0e34299df3591d979-Abstract.html
[180]
Meng Qu and Jian Tang. 2019. Probabilistic logic neural networks for reasoning. In Proceedings of the Advances in Neural Information Processing Systems. 7710–7720. Retrieved from https://proceedings.neurips.cc/paper/2019/hash/13e5ebb0fa112fe1b31a1067962d74a7-Abstract.html
[181]
Yuyu Zhang, Xinshi Chen, Yuan Yang, Arun Ramamurthy, Bo Li, Yuan Qi, and Le Song. 2020. Efficient probabilistic logic reasoning with graph neural networks. In Proceedings of the ICLR. Retrieved from https://openreview.net/forum?id=rJg76kStwH
[182]
Vicente Iván Sánchez Carmona, Tim Rocktäschel, Sebastian Riedel, and Sameer Singh. 2015. Towards extracting faithful and descriptive representations of latent variable models. In Proceedings of the AAAI Spring Symposia Series. Retrieved from http://www.aaai.org/ocs/index.php/SSS/SSS15/paper/view/10304
[183]
Yatin Nandwani, Ankesh Gupta, Aman Agrawal, Mayank Singh Chauhan, Parag Singla, and Mausam. 2020. OxKBC: Outcome explanation for factorization based knowledge base completion. In Proceedings of the Automated Knowledge Base Construction. DOI:
[184]
Zhitao Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, and Jure Leskovec. 2019. GNNExplainer: Generating explanations for graph neural networks. In Proceedings of the Advances in Neural Information Processing Systems. 9240–9251. Retrieved from https://proceedings.neurips.cc/paper/2019/hash/d80b7040b773199015de6d3b4293c8ff-Abstract.html
[185]
Pouya Pezeshkpour, Yifan Tian, and Sameer Singh. 2019. Investigating robustness and interpretability of link prediction via adversarial modifications. In Proceedings of the NAACL-HLT, 2019, Volume 1 (Long and Short Papers). 3336–3347. DOI:
[186]
Ruobing Xie, Zhiyuan Liu, Fen Lin, and Leyu Lin. 2018. Does William Shakespeare REALLY write Hamlet? Knowledge representation learning with confidence. In Proceedings of the AAAI Conference on Artificial Intelligence. 4954–4961. Retrieved from https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16577
[187]
Tiansi Dong, Zhigang Wang, Juanzi Li, Christian Bauckhage, and Armin B. Cremers. 2019. Triple classification using regions and fine-grained entity typing. In Proceedings of the AAAI Conference on Artificial Intelligence. 77–85. DOI:
[188]
Elvira Amador-Domínguez, Emilio Serrano, Daniel Manrique, Patrick Hohenecker, and Thomas Lukasiewicz. 2021. An ontology-based deep learning approach for triple classification with out-of-knowledge-base entities. Information Sciences 564 (2021), 85–102. DOI:
[189]
Dai Quoc Nguyen, Tuan Nguyen, and Dinh Phung. 2020. A relational memory-based embedding model for triple classification and search personalization. In Proceedings of the ACL, 2020. 3429–3435. DOI:
[190]
Tao Sun, Jiaojiao Zhai, and Qi Wang. 2020. NovEA: A novel model of entity alignment using attribute triples and relation triples. In Knowledge Science, Engineering and Management: 13th International Conference, KSEM 2020, Hangzhou, China, August 28–30, 2020, Proceedings, Part I 13. 161–173. DOI:
[191]
Fuzhen He, Zhixu Li, Qiang Yang, An Liu, Guanfeng Liu, Pengpeng Zhao, Lei Zhao, Min Zhang, and Zhigang Chen. 2019. Unsupervised entity alignment using attribute triples and relation triples. In Database Systems for Advanced Applications: 24th International Conference, DASFAA 2019, Chiang Mai, Thailand, April 22–25, 2019, Proceedings, Part I 24. 367–382. DOI:
[192]
Hsiu-Wei Yang, Yanyan Zou, Peng Shi, Wei Lu, Jimmy Lin, and Xu Sun. 2019. Aligning cross-lingual entities with multi-aspect information. In Proceedings of the EMNLP-IJCNLP, 2019. 4430–4440. DOI:
[193]
Zequn Sun, Wei Hu, and Chengkai Li. 2017. Cross-lingual entity alignment via joint attribute-preserving embedding. In The Semantic Web – ISWC 2017: 16th International Semantic Web Conference, Vienna, Austria, October 21–25, 2017, Proceedings, Part I 16. 628–644. DOI:
[194]
Bayu Distiawan Trisedya, Jianzhong Qi, and Rui Zhang. 2019. Entity alignment between knowledge graphs using attribute embeddings. In Proceedings of the AAAI Conference on Artificial Intelligence. 297–304. DOI:
[195]
Wang-Chiew Tan. 2020. Technical perspective: Entity matching with magellan. Communications of the ACM 63, 8 (2020), 82. DOI:
[196]
Michael Sejr Schlichtkrull and Héctor Martínez Alonso. 2016. MSejrKu at SemEval-2016 Task 14: Taxonomy enrichment by evidence ranking. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016). 1337–1341. DOI:
[197]
Sidharth Mudgal, Han Li, Theodoros Rekatsinas, AnHai Doan, Youngchoon Park, Ganesh Krishnan, Rohit Deep, Esteban Arcaute, and Vijay Raghavendra. 2018. Deep learning for entity matching: A design space exploration. In Proceedings of the 2018 International Conference on Management of Data. 19–34. DOI:
[198]
David Jurgens and Mohammad Taher Pilehvar. 2016. SemEval-2016 task 14: Semantic taxonomy enrichment. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016). 1092–1102. DOI:
[199]
Max Berrendorf, Evgeniy Faerman, and Volker Tresp. 2021. Active learning for entity alignment. In Advances in Information Retrieval: 43rd European Conference on IR Research, ECIR 2021, Virtual Event, March 28–April 1, 2021, Proceedings, Part I 43. 48–62. DOI:
[200]
Hao Zhu, Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2017. Iterative entity alignment via joint knowledge embeddings. In Proceedings of the IJCAI, 2017. 4258–4264. DOI:
[201]
Zequn Sun, Wei Hu, Qingheng Zhang, and Yuzhong Qu. 2018. Bootstrapping entity alignment with knowledge graph embedding. In Proceedings of the IJCAI, 2018. 4396–4402. DOI:
[202]
Qingheng Zhang, Zequn Sun, Wei Hu, Muhao Chen, Lingbing Guo, and Yuzhong Qu. 2019. Multi-view knowledge graph embedding for entity alignment. In Proceedings of the IJCAI, 2019. 5429–5435. DOI:
[203]
Muhao Chen, Yingtao Tian, Mohan Yang, and Carlo Zaniolo. 2017. Multilingual knowledge graph embeddings for cross-lingual knowledge alignment. In Proceedings of the IJCAI, 2017. 1511–1517. DOI:
[204]
Zijie Huang, Zheng Li, Haoming Jiang, Tianyu Cao, Hanqing Lu, Bing Yin, Karthik Subbian, Yizhou Sun, and Wei Wang. 2022. Multilingual knowledge graph completion with self-supervised adaptive graph alignment. In Proceedings of the ACL. DOI:
[205]
Xiao Liu, Haoyun Hong, Xinghao Wang, Zeyi Chen, Evgeny Kharlamov, Yuxiao Dong, and Jie Tang. 2022. SelfKG: Self-supervised entity alignment in knowledge graphs. In Proceedings of the ACM Web Conference 2022. 860–870. DOI:
[206]
Muhao Chen, Yingtao Tian, Kai-Wei Chang, Steven Skiena, and Carlo Zaniolo. 2018. Co-training embeddings of knowledge graphs and entity descriptions for cross-lingual entity alignment. In Proceedings of the IJCAI, 2018. 3998–4004. DOI:
[207]
Bo Chen, Jing Zhang, Xiaobin Tang, Hong Chen, and Cuiping Li. 2020. JarKA: Modeling attribute interactions for cross-lingual knowledge alignment. In Advances in Knowledge Discovery and Data Mining: 24th Pacific-Asia Conference, PAKDD 2020, Singapore, May 11–14, 2020, Proceedings, Part I 24. 845–856. DOI:
[208]
Zhichun Wang, Qingsong Lv, Xiaohan Lan, and Yu Zhang. 2018. Cross-lingual knowledge graph alignment via graph convolutional networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 349–357. DOI:
[209]
Yao Zhu, Hongzhi Liu, Zhonghai Wu, and Yingpeng Du. 2021. Relation-aware neighborhood matching model for entity alignment. In Proceedings of the AAAI Conference on Artificial Intelligence. 4749–4756. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16606
[210]
Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, Rui Yan, and Dongyan Zhao. 2019. Relation-aware entity alignment for heterogeneous knowledge graphs. In Proceedings of the IJCAI, 2019. 5278–5284. DOI:
[211]
Kun Xu, Liwei Wang, Mo Yu, Yansong Feng, Yan Song, Zhiguo Wang, and Dong Yu. 2019. Cross-lingual knowledge graph alignment via graph matching neural network. In Proceedings of the ACL, 2019, Volume 1: Long Papers. 3156–3161. DOI:
[212]
Zhiyuan Liu, Yixin Cao, Liangming Pan, Juanzi Li, and Tat-Seng Chua. 2020. Exploring and evaluating attributes, values, and structures for entity alignment. In Proceedings of the EMNLP, 2020. 6355–6364. DOI:
[213]
Yixin Cao, Zhiyuan Liu, Chengjiang Li, Zhiyuan Liu, Juanzi Li, and Tat-Seng Chua. 2019. Multi-channel graph neural network for entity alignment. In Proceedings of the ACL, 2019, Volume 1: Long Papers. 1452–1461. DOI:
[214]
Shixian Jiang, Tiezheng Nie, Derong Shen, Yue Kou, and Ge Yu. 2021. Entity alignment of knowledge graph by joint graph attention and translation representation. In Web Information Systems and Applications: 18th International Conference, WISA 2021, Kaifeng, China, September 24–26, 2021, Proceedings 18. 347–358. DOI:
[215]
Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh V. Chawla, and Meng Jiang. 2019. The role of "condition": A novel scientific knowledge graph representation and construction model. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 1634–1642. DOI:
[216]
Tingyue Zheng, Ziqiang Xu, Yufan Li, Yuan Zhao, Bin Wang, and Xiaochun Yang. 2021. A novel conditional knowledge graph representation and construction. In Proceedings of the CAAI International Conference on Artificial Intelligence. 383–394. DOI:
[217]
Fei Cheng and Yusuke Miyao. 2017. Classifying temporal relations by bidirectional LSTM over dependency paths. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). 1–6. DOI:
[218]
Yuanliang Meng, Anna Rumshisky, and Alexey Romanov. 2017. Temporal information extraction for question answering using syntactic dependencies in an LSTM-based architecture. In Proceedings of the EMNLP, 2017. 887–896. DOI:
[219]
Siddharth Vashishtha, Benjamin Van Durme, and Aaron Steven White. 2019. Fine-grained temporal relation extraction. In Proceedings of the ACL, 2019, Volume 1: Long Papers. 2906–2919. DOI:
[220]
Puneet Mathur, Rajiv Jain, Franck Dernoncourt, Vlad I. Morariu, Quan Hung Tran, and Dinesh Manocha. 2021. TIMERS: Document-level temporal relation extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). 524–533. DOI:
[221]
Julien Leblay and Melisachew Wudage Chekol. 2018. Deriving validity time in knowledge graph. In Companion Proceedings of The Web Conference 2018. 1771–1776. DOI:
[222]
Shib Sankar Dasgupta, Swayambhu Nath Ray, and Partha P. Talukdar. 2018. HyTE: Hyperplane-based temporally aware knowledge graph embedding. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 2001–2011. DOI:
[223]
Alberto García-Durán, Sebastijan Dumancic, and Mathias Niepert. 2018. Learning sequence encoders for temporal knowledge graph completion. In Proceedings of the EMNLP, 2018. 4816–4821. https://aclanthology.org/D18-1516/
[224]
Yu Liu, Wen Hua, Kexuan Xin, and Xiaofang Zhou. 2019. Context-aware temporal knowledge graph embedding. In Web Information Systems Engineering – WISE 2019: 20th International Conference, Hong Kong, China, November 26–30, 2019, Proceedings 20. 583–598. DOI:
[225]
Lifan Lin and Kun She. 2020. Tensor decomposition-based temporal knowledge graph embedding. In Proceedings of the 2020 IEEE 32nd International Conference on Tools with Artificial Intelligence. 969–975. DOI:
[226]
Timothée Lacroix, Guillaume Obozinski, and Nicolas Usunier. 2020. Tensor decompositions for temporal knowledge base completion. In ICLR. Retrieved from https://openreview.net/forum?id=rke2P1BFwS
[227]
Pengpeng Shao, Dawei Zhang, Guohua Yang, Jianhua Tao, Feihu Che, and Tong Liu. 2022. Tucker decomposition-based temporal knowledge graph completion. Knowledge-Based Systems 238 (2022), 107841. DOI:
[228]
Wessel Radstok, Mel Chekol, and Yannis Velegrakis. 2021. Leveraging static models for link prediction in temporal knowledge graphs. In Proceedings of the 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence. 1034–1041. DOI:
[229]
Jaehun Jung, Jinhong Jung, and U Kang. 2021. Learning to walk across time for interpretable temporal knowledge graph completion. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 786–795. DOI:
[230]
Hao Liu, Shu-wang Zhou, Changfang Chen, Tianlei Gao, Jiyong Xu, and Minglei Shu. 2022. Dynamic knowledge graph reasoning based on deep reinforcement learning. Knowledge-Based Systems 241 (2022), 108235. DOI:
[231]
Tingsong Jiang, Tianyu Liu, Tao Ge, Lei Sha, Sujian Li, Baobao Chang, and Zhifang Sui. 2016. Encoding temporal information for time-aware link prediction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. 2350–2354. DOI:
[232]
Kui Yu, Xianjie Guo, Lin Liu, Jiuyong Li, Hao Wang, Zhaolong Ling, and Xindong Wu. 2020. Causality-based feature selection: Methods and evaluations. ACM Computing Surveys 53, 5 (2020), 111:1–111:36. DOI:
[233]
Rakshit Trivedi, Hanjun Dai, Yichen Wang, and Le Song. 2017. Know-evolve: Deep temporal reasoning for dynamic knowledge graphs. In Proceedings of the International Conference on Machine Learning. 3462–3471. Retrieved from http://proceedings.mlr.press/v70/trivedi17a.html
[234]
Woojeong Jin, Changlin Zhang, Pedro A. Szekely, and Xiang Ren. 2019. Recurrent event network for reasoning over temporal knowledge graphs. arXiv:1904.05530. Retrieved from http://arxiv.org/abs/1904.05530
[235]
Xuhui Li, Liuyan Liu, Xiaoguang Wang, Yiwen Li, Qingfeng Wu, and Tieyun Qian. 2021. Towards evolutionary knowledge representation under the big data circumstance. The Electronic Library 39, 3 (2021), 392–410. DOI:
[236]
Zhen Han, Peng Chen, Yunpu Ma, and Volker Tresp. 2020. DyERNIE: Dynamic evolution of Riemannian manifold embeddings for temporal knowledge graph completion. In EMNLP, 2020. 7301–7316. DOI:
[237]
Tony Gracious, Shubham Gupta, Arun Kanthali, Rui M. Castro, and Ambedkar Dukkipati. 2021. Neural latent space model for dynamic networks and temporal knowledge graphs. In Proceedings of the AAAI Conference on Artificial Intelligence. 4054–4062. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16526
[238]
Yuchen Yan, Lihui Liu, Yikun Ban, Baoyu Jing, and Hanghang Tong. 2021. Dynamic knowledge graph alignment. In Proceedings of the AAAI Conference on Artificial Intelligence. 4564–4572. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16585
[239]
Kristinn Andersen, Helgi Thorbergsson, Saemundur E. Thorsteinsson, and Karl S. Gudmundsson. 2020. The small languages in the large world of technology. In Proceedings of the 2020 IEEE International Professional Communication Conference. IEEE, 30–33.
[240]
Tristan Miller, Christian Hempelmann, and Iryna Gurevych. 2017. SemEval-2017 Task 7: Detection and interpretation of English puns. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). 58–68. DOI:
[241]
Cunxiang Wang, Shuailong Liang, Yili Jin, Yilong Wang, Xiaodan Zhu, and Yue Zhang. 2020. SemEval-2020 Task 4: Commonsense validation and explanation. In Proceedings of the SemEval@COLING, 2020. 307–321. DOI:
[242]
Hongbin Ye, Ningyu Zhang, Hui Chen, and Huajun Chen. 2022. Generative knowledge graph construction: A review. In EMNLP. 1–17.
[243]
Ye Liu, Hui Li, Alberto García-Durán, Mathias Niepert, Daniel Oñoro-Rubio, and David S. Rosenblum. 2019. MMKG: Multi-modal knowledge graphs. In The Semantic Web: 16th International Conference, ESWC 2019, Portorož, Slovenia, June 2–6, 2019, Proceedings 16. 459–474. DOI:
[244]
Shahi Dost, Luciano Serafini, Marco Rospocher, Lamberto Ballan, and Alessandro Sperduti. 2022. Aligning and linking entity mentions in image, text, and knowledge base. Data and Knowledge Engineering 138 (2022), 101975. DOI:
[245]
Qingyu Guo, Fuzhen Zhuang, Chuan Qin, Hengshu Zhu, Xing Xie, Hui Xiong, and Qing He. 2022. A survey on knowledge graph-based recommender systems. IEEE Trans. Knowl. Data Eng. 34, 8 (2022), 3549–3568. DOI:
[246]
Hongwei Wang, Fuzheng Zhang, Xing Xie, and Minyi Guo. 2018. DKN: Deep knowledge-aware network for news recommendation. In WWW. Pierre-Antoine Champin, Fabien Gandon, Mounia Lalmas, and Panagiotis G. Ipeirotis (Eds.), 1835–1844. DOI:
[247]
Cairong Yan, Shuai Liu, Yanting Zhang, Zijian Wang, and Pengwei Wang. 2021. A multi-task learning approach for recommendation based on knowledge graph. In International Joint Conference on Neural Networks, IJCNN 2021. IEEE, 1–8. DOI:
[248]
Jeff Z. Pan, Siyana Pavlova, Chenxi Li, Ningxi Li, Yangmei Li, and Jinshuo Liu. 2018. Content based fake news detection using knowledge graphs. In ISWC. 669–683. DOI:
[249]
Yaqian Dun, Kefei Tu, Chen Chen, Chunyan Hou, and Xiaojie Yuan. 2021. KAN: Knowledge-aware attention network for fake news detection. In AAAI. 81–89. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16080
[250]
Shengsheng Qian, Jun Hu, Quan Fang, and Changsheng Xu. 2021. Knowledge-aware multi-modal adaptive graph convolutional networks for fake news detection. ACM Trans. Multim. Comput. Commun. Appl. 17, 3 (2021), 98:1–98:23. DOI:
[251]
Seungwhan Moon, Pararth Shah, Anuj Kumar, and Rajen Subba. 2019. OpenDialKG: Explainable conversational reasoning with attention-based walks over knowledge graphs. In ACL. 845–854. DOI:
[252]
Jinghui Qin, Zheng Ye, Jianheng Tang, and Xiaodan Liang. 2020. Dynamic knowledge routing network for target-guided open-domain conversation. In AAAI, IAAI, EAAI 2020. 8657–8664. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/6390
[253]
Bo Xu, Yong Xu, Jiaqing Liang, Chenhao Xie, Bin Liang, Wanyun Cui, and Yanghua Xiao. 2017. CN-DBpedia: A never-ending Chinese knowledge extraction system. In IEA/AIE, 2017, Proceedings, Part II. 428–438. DOI:
[254]
Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: A core of semantic knowledge. In Proc. WWW, 2007. 697–706. DOI:
[255]
Jindong Chen, Ao Wang, Jiangjie Chen, Yanghua Xiao, Zhendong Chu, Jingping Liu, Jiaqing Liang, and Wei Wang. 2019. CN-Probase: A data-driven approach for large-scale Chinese taxonomy construction. In Proc. ICDE, 2019. 1706–1709. DOI:
[256]
Roberto Navigli and Simone Paolo Ponzetto. 2010. BabelNet: Building a very large multilingual semantic network. In Proc. ACL, 2010. 216–225. Retrieved from https://aclanthology.org/P10-1023/
[257]
Hugo Liu and Push Singh. 2004. ConceptNet - a practical commonsense reasoning tool-kit. BT Technology Journal 22, 4 (2004), 211–226.
[258]
Zhendong Dong and Qiang Dong. 2003. HowNet - a hybrid language and knowledge resource. In ICNLP, 2003, Proceedings. 820–824. DOI:
[259]
Shiyi Han, Yuhui Zhang, Yunshan Ma, Cunchao Tu, Zhipeng Guo, Zhiyuan Liu, and Maosong Sun. 2016. THUOCL: Tsinghua Open Chinese Lexicon. Tsinghua University (2016). Retrieved from http://thuocl.thunlp.org/
[260]
Jie Tang, Duo Zhang, and Limin Yao. 2007. Social network extraction of academic researchers. In Proc. ICDM, 2007. 292–301. DOI:
[261]
Fanjin Zhang, Xiao Liu, Jie Tang, Yuxiao Dong, Peiran Yao, Jie Zhang, Xiaotao Gu, Yan Wang, Bin Shao, Rui Li, and Kuansan Wang. 2019. OAG: Toward linking large-scale heterogeneous entity graphs. In KDD, 2019. 2585–2595. DOI:
[262]
Huajun Chen, Ning Hu, Guilin Qi, Haofen Wang, Zhen Bi, Jie Li, and Fan Yang. 2021. OpenKG chain: A blockchain infrastructure for open knowledge graphs. Data Intell. 3, 2 (2021), 205–227. DOI:
[263]
Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proc. EMNLP, 2011.
[264]
Mausam, Michael Schmitz, Stephen Soderland, Robert Bart, and Oren Etzioni. 2012. Open language learning for information extraction. In Proc. EMNLP-CoNLL, ACL 2012. 523–534. Retrieved from https://aclanthology.org/D12-1048/
[265]
Steven Bird. 2006. NLTK: The natural language toolkit. In ACL, 2006. Nicoletta Calzolari, Claire Cardie, and Pierre Isabelle (Eds.). DOI:
[266]
Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The stanford CoreNLP natural language processing toolkit. In Proc. ACL, 2014. 55–60. DOI:
[267]
Ziqi Zhang. 2017. Effective and efficient semantic table interpretation using TableMiner+. Semantic Web 8, 6 (2017), 921–957. DOI:
[268]
Marco Cremaschi, Anisa Rula, Alessandra Siano, and Flavio De Paoli. 2019. MantisTable: A tool for creating semantic annotations on tabular data. In ESWC 2019 Satellite Events, 2019, Revised Selected Papers. 18–23. DOI:
[269]
Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. 2020. spaCy: Industrial-strength Natural Language Processing in Python (2020). DOI:
[270]
Xu Han, Tianyu Gao, Yuan Yao, Deming Ye, Zhiyuan Liu, and Maosong Sun. 2019. OpenNRE: An open and extensible toolkit for neural relation extraction. In Proceedings of EMNLP-IJCNLP: System Demonstrations. 169–174. DOI:
[271]
Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable feature learning for networks. In Proceedings of KDD. 855–864.
[272]
Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. 2015. Line: Large-scale information network embedding. In Proceedings of WWW. 1067–1077.
[273]
Wei Hu and Yuzhong Qu. 2008. Falcon-AO: A practical ontology matching system. J. Web Semant. 6, 3 (2008), 237–239. DOI:
[274]
Hao Peng, Haoran Li, Yangqiu Song, Vincent W. Zheng, and Jianxin Li. 2021. Differentially private federated knowledge graphs embedding. In CIKM, 2021. 1416–1425. DOI:
[275]
Mingyang Chen, Wen Zhang, Zonggang Yuan, Yantao Jia, and Huajun Chen. 2021. FedE: Embedding knowledge graphs in federated setting. In IJCKG, 2021. 80–88. DOI:
[276]
Cynthia Dwork. 2008. Differential privacy: A survey of results. In TAMC 2008. Manindra Agrawal, Ding-Zhu Du, Zhenhua Duan, and Angsheng Li (Eds.), Vol. 4978, 1–19. DOI:
[277]
NhatHai Phan, Minh N. Vu, Yang Liu, Ruoming Jin, Dejing Dou, Xintao Wu, and My T. Thai. 2019. Heterogeneous gaussian mechanism: Preserving differential privacy in deep learning with provable robustness. In IJCAI 2019. 4753–4759. DOI:
[278]
Ferdinando Fioretto, Cuong Tran, Pascal Van Hentenryck, and Keyu Zhu. 2022. Differential privacy and fairness in decisions and learning tasks: A survey. In Proceedings of the 31st International Joint Conference on Artificial Intelligence. ijcai.org, 5470–5477. DOI:
[279]
Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan, and Somayeh Sojoudi. 2021. Improving fairness and privacy in selection problems. In AAAI 2021, IAAI 2021, EAAI 2021. 8092–8100. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16986
[280]
Aisling McGlinchey and Oliver Mason. 2021. Observations on the bias of nonnegative mechanisms for differential privacy. arXiv:2101.02957. Retrieved from https://arxiv.org/abs/2101.02957
[281]
Ken Mishima and Hayato Yamana. 2022. A survey on explainable fake news detection. IEICE Trans. Inf. Syst. 105-D, 7 (2022), 1249–1257.
[282]
Rubén Tolosana, Rubén Vera-Rodríguez, Julian Fiérrez, Aythami Morales, and Javier Ortega-Garcia. 2020. Deepfakes and beyond: A survey of face manipulation and fake detection. Inf. Fusion 64 (2020), 131–148.
[283]
Luciano Floridi and Massimo Chiriatti. 2020. GPT-3: Its nature, scope, limits, and consequences. Minds Mach. 30, 4 (2020), 681–694. DOI:
[284]
Eric Mitchell, Yoonho Lee, Alexander Khazatsky, Christopher D. Manning, and Chelsea Finn. 2023. DetectGPT: Zero-shot machine-generated text detection using probability curvature. In ICML 2023 (Proceedings of Machine Learning Research, Vol. 202). PMLR, 24950–24962. Retrieved from https://proceedings.mlr.press/v202/mitchell23a.html
[285]
Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. KG-BERT: BERT for knowledge graph completion. arXiv:1909.03193. Retrieved from https://arxiv.org/abs/1909.03193
[286]
Michihiro Yasunaga, Antoine Bosselut, Hongyu Ren, Xikun Zhang, Christopher D. Manning, Percy Liang, and Jure Leskovec. 2022. Deep bidirectional language-knowledge graph pretraining. In NeurIPS. Retrieved from http://papers.nips.cc/paper_files/paper/2022/hash/f224f056694bcfe465c5d84579785761-Abstract-Conference.html
[287]
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick S. H. Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander H. Miller. 2019. Language models as knowledge bases?. In EMNLP-IJCNLP. 2463–2473. DOI:
[288]
Ye Liu, Yao Wan, Lifang He, Hao Peng, and Philip S. Yu. 2021. KG-BART: Knowledge graph-augmented BART for generative commonsense reasoning. In AAAI. 6418–6425. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16796
[289]
Henry Weld, Xiaoqi Huang, Siqu Long, Josiah Poon, and Soyeon Caren Han. 2023. A survey of joint intent detection and slot filling models in natural language understanding. ACM Comput. Surv. 55, 8 (2023), 156:1–156:38. DOI:
[290]
Henry Weld, Xiaoqi Huang, Siqi Long, Josiah Poon, and Soyeon Caren Han. 2021. A survey of joint intent detection and slot-filling models in natural language understanding. arXiv:2101.08091. Retrieved from https://arxiv.org/abs/2101.08091
[291]
Nicholas Kushmerick. 2000. Wrapper induction: Efficiency and expressiveness. Artificial Intelligence 118, 1-2 (2000), 15–68.
[292]
David Buttler, Ling Liu, and Calton Pu. 2001. A fully automated object extraction system for the world wide web. In Proc. ICDCS, 2001. 361–370. DOI:
[293]
Beth M. Sundheim. 1996. The message understanding conferences. In TIPSTER TEXT PROGRAM PHASE II: Proceedings of a Workshop held at Vienna, 1996. 35–37. DOI:
[294]
S. Thenmalar, Balaji Jagan, and T. V. Geetha. 2015. Semi-supervised bootstrapping approach for named entity recognition. arXiv:1511.06833. Retrieved from https://arxiv.org/abs/1511.06833
[295]
Sergey Brin. 1998. Extracting patterns and relations from the world wide web. In The World Wide Web and Databases, International Workshop WebDB’98, 1998, Selected Papers. 172–183. DOI:
[296]
Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Proc. ACM DL, 2000. 85–94. DOI:
[297]
Meng Jiang, Jingbo Shang, Taylor Cassidy, Xiang Ren, Lance M. Kaplan, Timothy P. Hanratty, and Jiawei Han. 2017. MetaPAD: Meta pattern discovery from massive text corpora. In KDD, 2017. 877–886. DOI:
[298]
Yanif Ahmad, Tudor Antoniu, Sharon Goldwater, and Shriram Krishnamurthi. 2003. A type system for statically detecting spreadsheet errors. In ASE, 2003. 174–183. DOI:
[299]
Yoones A. Sekhavat, Francesco Di Paolo, Denilson Barbosa, and Paolo Merialdo. 2014. Knowledge base augmentation using tabular data. In Proc. WWW, 2014 (CEUR Workshop Proceedings). Retrieved from http://ceur-ws.org/Vol-1184/ldow2014_paper_02.pdf
[300]
Emir Muñoz, Aidan Hogan, and Alessandra Mileo. 2014. Using linked data to mine RDF from wikipedia’s tables. In WSDM, 2014. 533–542. DOI:
[301]
Sebastian Krause, Hong Li, Hans Uszkoreit, and Feiyu Xu. 2012. Large-scale learning of relation-extraction rules with distant supervision from the web. In ISWC, 2012, Proceedings, Part I, Vol. 7649. 263–278. DOI:
[302]
Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In Proc. AAAI-15, 2015. 2181–2187. Retrieved from http://www.aaai.org/ocs/index.php/AAAI/AAAI15/paper/view/9571
[303]
Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Proc. AAAI-14, 2014. 1112–1119. Retrieved from http://www.aaai.org/ocs/index.php/AAAI/AAAI14/paper/view/8531
[304]
Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proc. ICML, 2011. 809–816. Retrieved from https://icml.cc/2011/papers/438_icmlpaper.pdf
[305]
Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In ICLR, 2015, Conference Track Proceedings. Retrieved from http://arxiv.org/abs/1412.6575
[306]
Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2D knowledge graph embeddings. In AAAI, IAAI, EAAI, 2018. 1811–1818. Retrieved from https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17366
[307]
Zhaoli Zhang, Zhifei Li, Hai Liu, and Neal N. Xiong. 2022. Multi-scale dynamic convolutional network for knowledge graph embedding. IEEE Trans. Knowl. Data Eng. 34, 5 (2022), 2335–2347. DOI:
[308]
Jun Yuan, Neng Gao, and Ji Xiang. 2019. TransGate: Knowledge graph embedding with shared gate structure. In AAAI, IAAI, EAAI, 2019. 3100–3107. DOI:
[309]
Yukun Zuo, Quan Fang, Shengsheng Qian, Xiaorui Zhang, and Changsheng Xu. 2018. Representation learning of knowledge graphs with entity attributes and multimedia descriptions. In BigMM, 2018. 1–5. DOI:
[310]
Baoxu Shi and Tim Weninger. 2018. Open-world knowledge graph completion. In AAAI, IAAI, EAAI, 2018. 1957–1964. Retrieved from https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16055
[311]
Michael Sejr Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In ESWC, 2018, Proceedings. 593–607. DOI:
[312]
Chao Shang, Yun Tang, Jing Huang, Jinbo Bi, Xiaodong He, and Bowen Zhou. 2019. End-to-end structure-aware convolutional networks for knowledge base completion. In AAAI, IAAI, EAAI, 2019. 3060–3067. DOI:
[313]
Shikhar Vashishth, Soumya Sanyal, Vikram Nitin, and Partha P. Talukdar. 2020. Composition-based multi-relational graph convolutional networks. In ICLR. Retrieved from https://openreview.net/forum?id=BylA_C4tPr
[314]
Baoxu Shi and Tim Weninger. 2017. ProjE: Embedding projection for knowledge graph completion. In Proc. AAAI-17, 2017. 1236–1242. Retrieved from http://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14279
[315]
Saiping Guan, Xiaolong Jin, Yuanzhuo Wang, and Xueqi Cheng. 2018. Shared embedding based neural networks for knowledge graph completion. In CIKM, 2018. 247–256. DOI:
[316]
Junheng Hao, Muhao Chen, Wenchao Yu, Yizhou Sun, and Wei Wang. 2019. Universal representation learning of knowledge bases by jointly embedding instances and ontological concepts. In KDD, 2019. 1709–1719. DOI:
[317]
Yu Zhao, Anxiang Zhang, Ruobing Xie, Kang Liu, and Xiaojie Wang. 2020. Connecting embeddings for knowledge graph entity typing. In Proc. ACL, 2020. 6419–6428. DOI:
[318]
Hailong Jin, Lei Hou, and Juanzi Li. 2018. Type hierarchy enhanced heterogeneous network embedding for fine-grained entity typing in knowledge bases. In NLP-NABD, 2018, Proceedings. 170–182. DOI:
[319]
Hailong Jin, Lei Hou, Juanzi Li, and Tiansi Dong. 2018. Attributed and predictive entity embedding for fine-grained entity typing in knowledge bases. In Proc. COLING, 2018. 282–292. Retrieved from https://aclanthology.org/C18-1024/
[320]
Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Chengjiang Li, Xu Chen, and Tiansi Dong. 2018. Joint representation learning of cross-lingual words and entities via attentive distant supervision. In Proc. EMNLP, 2018. 227–237. DOI:
[321]
Wei Shen, Jianyong Wang, Ping Luo, and Min Wang. 2012. LIEGE: Link entities in web lists with knowledge base. In KDD, 2012. 1424–1432. DOI:
[322]
Amit Bagga and Breck Baldwin. 1998. Entity-based cross-document coreferencing using the vector space model. In COLING-ACL, 1998, Proceedings of the Conference. 79–85. DOI:
[323]
Ivo Lasek and Peter Vojtás. 2012. Various approaches to text representation for named entity disambiguation. In IIWAS, 2012. 256–262. DOI:
[324]
Hongzhao Huang, Larry P. Heck, and Heng Ji. 2015. Leveraging deep neural networks and knowledge graphs for entity disambiguation. arXiv:1504.07678. Retrieved from https://arxiv.org/abs/1504.07678
[325]
Wei Fang, Jianwen Zhang, Dilin Wang, Zheng Chen, and Ming Li. 2016. Entity disambiguation by knowledge and text jointly embedding. In SIGNLL, CoNLL, ACL, 2016. 260–269. DOI:
[326]
Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In Proc. EMNLP, 2013, A meeting of SIGDAT, a Special Interest Group of the ACL. 1971–1982. Retrieved from https://aclanthology.org/D13-1203/
[327]
Sebastian Martschat and Michael Strube. 2015. Latent structures for coreference resolution. Trans. Assoc. Comput. Linguistics 3 (2015), 405–418. Retrieved from https://tacl2013.cs.columbia.edu/ojs/index.php/tacl/article/view/604
[328]
Kaushik Chakrabarti, Surajit Chaudhuri, Tao Cheng, and Dong Xin. 2012. A framework for robust discovery of entity synonyms. In KDD, 2012. 1384–1392. DOI:
[329]
Xiang Ren, Zeqiu Wu, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, Tarek F. Abdelzaher, and Jiawei Han. 2017. CoType: Joint extraction of typed entities and relations with knowledge bases. In Proc. WWW, 2017. 1015–1024. DOI:
[330]
Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In NeurIPS, Proceedings, 2013. 2787–2795. Retrieved from https://proceedings.neurips.cc/paper/2013/hash/1cecc7a77928ca8133fa24680a88d2f9-Abstract.html
[331]
Christian Meilicke, Manuel Fink, Yanjie Wang, Daniel Ruffinelli, Rainer Gemulla, and Heiner Stuckenschmidt. 2018. Fine-grained evaluation of rule- and embedding-based systems for knowledge graph completion. In ISWC, 2018, Proceedings, Part I. 3–20. DOI:
[332]
Shu Guo, Quan Wang, Lihong Wang, Bin Wang, and Li Guo. 2016. Jointly embedding knowledge graphs and logical rules. In Proc. EMNLP, 2016. 192–202. DOI:
[333]
Shu Guo, Quan Wang, Lihong Wang, Bin Wang, and Li Guo. 2018. Knowledge graph embedding with iterative guidance from soft rules. In AAAI, IAAI, EAAI, 2018. 4816–4823. Retrieved from https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16369
[334]
Sinno Jialin Pan, Zhiqiang Toh, and Jian Su. 2013. Transfer joint embedding for cross-domain named entity recognition. ACM Trans. Inf. Syst. 31, 2 (2013), 7. DOI:
[335]
Bill Yuchen Lin and Wei Lu. 2018. Neural adaptation layers for cross-domain named entity recognition. In Proc. EMNLP, 2018. 2012–2022. DOI:
[336]
Karthik Narasimhan, Adam Yala, and Regina Barzilay. 2016. Improving information extraction by acquiring external evidence with reinforcement learning. In Proc. EMNLP, 2016. 2355–2365. DOI:
[337]
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin A. Riedmiller, Andreas Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. 2015. Human-level control through deep reinforcement learning. Nat. 518, 7540 (2015), 529–533. DOI:
[338]
YaoSheng Yang, Wenliang Chen, Zhenghua Li, Zhengqiu He, and Min Zhang. 2018. Distantly supervised NER with partial annotation learning and reinforcement learning. In Proc. COLING, 2018. 2159–2169. Retrieved from https://aclanthology.org/C18-1183/
[339]
Joey Tianyi Zhou, Hao Zhang, Di Jin, Hongyuan Zhu, Meng Fang, Rick Siow Mong Goh, and Kenneth Kwok. 2019. Dual adversarial neural transfer for low-resource named entity recognition. In Proc. ACL, 2019, Volume 1: Long Papers. 3461–3471. DOI:
[340]
Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao, and Shengping Liu. 2018. Adversarial transfer learning for Chinese named entity recognition with self-attention mechanism. In Proc. EMNLP, 2018. 182–192. DOI:
[341]
Jing Li, Deheng Ye, and Shuo Shang. 2019. Adversarial transfer for named entity boundary detection with pointer networks. In Proc. IJCAI, 2019. 5053–5059. DOI:
[342]
Yanyao Shen, Hyokun Yun, Zachary C. Lipton, Yakov Kronrod, and Animashree Anandkumar. 2018. Deep active learning for named entity recognition. In ICLR, 2018, Conference Track Proceedings. Retrieved from https://openreview.net/forum?id=ry018WZAZ
[343]
Yukun Ma, Erik Cambria, and Sa Gao. 2016. Label embedding for zero-shot fine-grained named entity typing. In COLING, 2016, Proceedings of the Conference: Technical Papers. 171–180. Retrieved from https://aclanthology.org/C16-1017/
[344]
Tao Zhang, Congying Xia, Chun-Ta Lu, and Philip S. Yu. 2020. MZET: Memory augmented zero-shot fine-grained named entity typing. In Proc. COLING, 2020. 77–87. DOI:
[345]
Yuhang Guo, Wanxiang Che, Ting Liu, and Sheng Li. 2011. A graph-based method for entity linking. In IJCNLP, 2011. 1010–1018. Retrieved from https://aclanthology.org/I11-1113/
[346]
Xianpei Han, Le Sun, and Jun Zhao. 2011. Collective entity linking in web text: A graph-based method. In SIGIR. 765–774. DOI:
[347]
Avirup Sil and Alexander Yates. 2013. Re-ranking for joint named-entity recognition and linking. In CIKM, 2013. 2369–2374. DOI:
[348]
Xiaohua Liu, Ming Zhou, Xiangyang Zhou, Zhongyang Fu, and Furu Wei. 2012. Joint inference of named entity recognition and normalization for tweets. In Proc. ACL, 2012, Volume 1: Long Papers. 526–535. Retrieved from https://aclanthology.org/P12-1055/
[349]
Minh C. Phan, Aixin Sun, Yi Tay, Jialong Han, and Chenliang Li. 2019. Pair-linking for collective entity disambiguation: Two could be better than all. IEEE Trans. Knowl. Data Eng. 31, 7 (2019), 1383–1396. DOI:
[350]
Wee Meng Soon, Hwee Tou Ng, and Chung Yong Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Comput. Linguistics 27, 4 (2001), 521–544. DOI:
[351]
Marta Recasens, Marie-Catherine de Marneffe, and Christopher Potts. 2013. The life and death of discourse entities: Identifying singleton mentions. In NAACL-HLT, Proceedings, 2013. 627–633. Retrieved from https://aclanthology.org/N13-1071/
[352]
Md. Altaf ur Rahman and Vincent Ng. 2009. Supervised models for coreference resolution. In Proc. EMNLP, 2009, A meeting of SIGDAT, a Special Interest Group of the ACL. 968–977. Retrieved from https://aclanthology.org/D09-1101/
[353]
Veselin Stoyanov and Jason Eisner. 2012. Easy-first coreference resolution. In Proc. COLING, 2012. 2519–2534. Retrieved from https://aclanthology.org/C12-1154/
[354]
Dan Roth and Wen-tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. In HLT-NAACL, 2004. 1–8. Retrieved from https://aclanthology.org/W04-2401/
[355]
Qi Li and Heng Ji. 2014. Incremental joint extraction of entity mentions and relations. In ACL, 2014, Volume 1: Long Papers. 402–412. DOI:
[356]
Sachin Pawar, Pushpak Bhattacharyya, and Girish Keshav Palshikar. 2017. End-to-end relation extraction using neural networks and markov logic networks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017. Mirella Lapata, Phil Blunsom, and Alexander Koller (Eds.), Association for Computational Linguistics, 818–827. DOI:
[357]
Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. In Proc. CoNLL, ACL, 2017. 333–342. DOI:
[358]
Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. In Proc. ACL, 2019, Volume 1: Long Papers. 2895–2905. DOI:
[359]
Victor Garcia Satorras and Joan Bruna Estrach. 2018. Few-shot learning with graph neural networks. In ICLR, 2018, Conference Track Proceedings. Retrieved from https://openreview.net/forum?id=BJj6qGbRW
[360]
Meng Qu, Tianyu Gao, Louis-Pascal A. C. Xhonneux, and Jian Tang. 2020. Few-shot relation extraction via bayesian meta-learning on relation graphs. In Proc. ICML, 2020. 7867–7876. Retrieved from http://proceedings.mlr.press/v119/qu20a.html
[361]
Ni Lao and William W. Cohen. 2010. Relational retrieval using a combination of path-constrained random walks. Mach. Learn. 81, 1 (2010), 53–67. DOI:
[362]
Matt Gardner, Partha Pratim Talukdar, Jayant Krishnamurthy, and Tom M. Mitchell. 2014. Incorporating vector space similarity in random walk inference over knowledge bases. In Proc. EMNLP, 2014, A meeting of SIGDAT, a Special Interest Group of the ACL. 397–406. DOI:
[363]
Quan Wang, Jing Liu, Yuanfei Luo, Bin Wang, and Chin-Yew Lin. 2016. Knowledge base completion via coupled path ranking. In Proc. ACL, 2016, Volume 1: Long Papers. DOI:
[364]
Bridget T. McInnes. 2016. VCU at SemEval-2016 Task 14: Evaluating definitional-based similarity measure for semantic taxonomy enrichment. In Proc. SemEval@NAACL-HLT, 2016. 1351–1355. DOI:
[365]
Luis Espinosa Anke, Francesco Ronzano, and Horacio Saggion. 2016. TALN at SemEval-2016 Task 14: Semantic taxonomy enrichment via sense-based embeddings. In Proc. SemEval@NAACL-HLT, 2016. 1332–1336. DOI:
[366]
Karen Spärck Jones. 2004. A statistical interpretation of term specificity and its application in retrieval. J. Documentation 60, 5 (2004), 493–502. DOI:
[367]
Giorgos Stoilos, Giorgos B. Stamou, and Stefanos D. Kollias. 2005. A string metric for ontology alignment. In ISWC, 2005, Proceedings. 624–637. DOI:
[368]
Nikhita Vedula, Patrick K. Nicholson, Deepak Ajwani, Sourav Dutta, Alessandra Sala, and Srinivasan Parthasarathy. 2018. Enriching taxonomies with functional domain knowledge. In SIGIR, 2018. 745–754. DOI:
[369]
Dmitry Frolov, Susana Nascimento, Trevor I. Fenner, and Boris G. Mirkin. 2019. Using taxonomy tree to generalize a fuzzy thematic cluster. In FUZZ-IEEE, 2019. 1–6. DOI:
[370]
François Scharffe, Yanbin Liu, and Chuguang Zhou. 2009. RDF-AI: An architecture for RDF datasets matching, fusion and interlink. In Proc. IJCAI Workshop on IR-KR, 2009. 23.
[371]
Axel-Cyrille Ngonga Ngomo and Sören Auer. 2011. LIMES - a time-efficient approach for large-scale link discovery on the web of data. In Proc. IJCAI, 2011. 2312–2317. DOI:
[372]
Maria Pershina, Mohamed Yakout, and Kaushik Chakrabarti. 2015. Holistic entity matching across knowledge graphs. In IEEE BigData, 2015. 1585–1590. DOI:
[373]
Chia-Hui Chang, Chun-Nan Hsu, and Shao-Chen Lui. 2003. Automatic information extraction from semi-structured web pages by pattern discovery. Decis. Support Syst. 35, 1 (2003), 129–147. DOI:
[374]
Chun-Nan Hsu and Ming-Tzung Dung. 1998. Generating finite-state transducers for semi-structured data extraction from the web. Inf. Syst. 23, 8 (1998), 521–538. DOI:
[375]
Paulo Braz Golgher, Altigran Soares da Silva, Alberto H. F. Laender, and Berthier A. Ribeiro-Neto. 2001. Bootstrapping for example-based data extraction. In Proc. CIKM, 2001. 371–378. DOI:
[376]
Andrew Carlson and Charles Schafer. 2008. Bootstrapping information extraction from semi-structured web pages. In ECML/PKDD, 2008, Proceedings, Part I. 195–210. DOI:
[377]
Brad Adelberg. 1998. NoDoSE - A tool for semi-automatically extracting semi-structured data from text documents. In SIGMOD, 1998. Laura M. Haas and Ashutosh Tiwary (Eds.), 283–294. DOI:
[378]
Alberto H. F. Laender, Berthier A. Ribeiro-Neto, and Altigran Soares da Silva. 2002. DEByE - data extraction by example. Data Knowl. Eng. 40, 2 (2002), 121–154. DOI:
[379]
Aidan Finn, Nicholas Kushmerick, and Barry Smyth. 2001. Fact or fiction: Content classification for digital libraries. In DELOS, 2001 (ERCIM Workshop Proceedings). Retrieved from http://www.ercim.org/publication/ws-proceedings/DelNoe02/AidanFinn.pdf
[380]
Tim Weninger, William H. Hsu, and Jiawei Han. 2010. CETR: Content extraction via tag ratios. In Proc. WWW, 2010. 971–980. DOI:
[381]
Fei Sun, Dandan Song, and Lejian Liao. 2011. DOM based content extraction via text density. In SIGIR, 2011. 245–254. DOI:
[382]
Gong-Qing Wu, Li Li, Xuegang Hu, and Xindong Wu. 2013. Web news extraction via path ratios. In CIKM, 2013. 2059–2068. DOI:
[383]
Deng Cai, Shipeng Yu, Ji-Rong Wen, and Wei-Ying Ma. 2003. Extracting content structure for web pages based on visual representation. In APWeb, 2003, Proceedings. 406–417. DOI:
[384]
Yalin Wang and Jianying Hu. 2002. A machine learning based approach for table detection on the web. In WWW, 2002. 242–250. DOI:
[385]
Michael J. Cafarella, Alon Y. Halevy, Yang Zhang, Daisy Zhe Wang, and Eugene Wu. 2008. Uncovering the relational web. In 11th International Workshop on the Web and Databases, WebDB. Retrieved from http://webdb2008.como.polimi.it/images/stories/WebDB2008/paper30.pdf
[386]
Julian Eberius, Katrin Braunschweig, Markus Hentsch, Maik Thiele, Ahmad Ahmadov, and Wolfgang Lehner. 2015. Building the dresden web table corpus: A classification approach. In BDC, 2015. 41–50. DOI:
[387]
Michael J. Cafarella, Alon Y. Halevy, and Nodira Khoussainova. 2009. Data integration for the relational web. Proc. VLDB Endow. 2, 1 (2009), 1090–1101. DOI:
[388]
Shuo Zhang and Krisztian Balog. 2020. Web table extraction, retrieval, and augmentation: A survey. ACM Trans. Intell. Syst. Technol. 11, 2 (2020), 13:1–13:35. DOI:
[389]
Bruce Momjian. 2001. PostgreSQL: Introduction and Concepts. Vol. 192. Addison-Wesley New York.
[390]
Bin Shao, Haixun Wang, and Yatao Li. 2013. Trinity: A distributed graph engine on a memory cloud. In SIGMOD, 2013. 505–516. DOI:
[391]
J. Chris Anderson, Jan Lehnardt, and Noah Slater. 2010. CouchDB - The Definitive Guide: Time to Relax. O’Reilly.
[392]
Lei Zou, Jinghui Mo, Lei Chen, M. Tamer Özsu, and Dongyan Zhao. 2011. gStore: Answering SPARQL queries via subgraph matching. Proc. VLDB Endow. 4, 8 (2011), 482–493. DOI:
[393]
Jim Webber. 2012. A programmatic introduction to Neo4j. In SPLASH, Proceedings, 2012. 217–218. DOI:

Published In

ACM Computing Surveys, Volume 56, Issue 4, April 2024, 1026 pages
EISSN: 1557-7341
DOI: 10.1145/3613581

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 30 November 2023
Online AM: 05 September 2023
Accepted: 21 August 2023
Revised: 29 May 2023
Received: 30 June 2022
Published in CSUR Volume 56, Issue 4

Author Tags

  1. Knowledge graph
  2. deep learning
  3. information extraction
  4. knowledge graph completion
  5. knowledge fusion
  6. logic reasoning

Qualifiers

  • Survey

B.2.1 Data Preprocessing.

Many web crawlers support data preprocessing tasks that extract informative structures or contents. Besides WebCollector [47], Web Scraper is a user-friendly tool for manually extracting data from multiple web pages; it provides a user interface for selecting the web structures of interest and a cloud server for large-scale content extraction.

E Semi-structured Data Pre-processing

Real-world raw data sources mix content of multiple structures with irrelevant parts that impair knowledge extraction. Data preprocessing is therefore necessary for handling such messy data environments. The main preprocessing sub-tasks are Content Extraction and Structure Interpretation.
E.1 Content Extraction
Many web pages contain non-content noise such as advertisements. Content extraction aims to erase these irrelevant elements while preserving the knowledge-bearing content. Using web crawlers such as JSoup, BeautifulSoup, and Web Scraper, which retrieve and interpret elements in Document Object Model (DOM) structures, users can manually select the main part of a web page (e.g., the contents enveloped by “<table>”). However, when the data volume is high, such manual work does not scale. Mainstream automatic content extraction methods fall into two families: wrapper-based methods and statistic-based methods.
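As a minimal illustration of the DOM-based manual selection step, the sketch below keeps only the text enclosed by “<table>” elements, using Python's standard html.parser in place of JSoup or BeautifulSoup; the sample page is invented:

```python
from html.parser import HTMLParser

class TableTextExtractor(HTMLParser):
    """Collects the text that appears inside <table> elements,
    mimicking the manual "select the main part" step."""
    def __init__(self):
        super().__init__()
        self.depth = 0    # current nesting level of <table> tags
        self.chunks = []  # extracted text fragments

    def handle_starttag(self, tag, attrs):
        if tag == "table":
            self.depth += 1

    def handle_endtag(self, tag):
        if tag == "table" and self.depth > 0:
            self.depth -= 1

    def handle_data(self, data):
        # Keep text only when we are inside at least one <table>.
        if self.depth > 0 and data.strip():
            self.chunks.append(data.strip())

page = ('<html><body><div>ad banner</div>'
        '<table><tr><td>Tokyo</td><td>Japan</td></tr></table>'
        '</body></html>')
parser = TableTextExtractor()
parser.feed(page)
print(parser.chunks)  # the <div> noise is dropped, table cells survive
```

Real pages call for a more robust selector (e.g., CSS queries in BeautifulSoup), but the principle is the same: walk the DOM and keep only the chosen subtree.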
Wrapper-based methods are the earliest attempts to detect main contents, leveraging matching rules to capture informative content. Off-the-shelf wrapper tools, including IEPAD [373] and SoftMealy [374], automatically generate rules from semi-structured pages. Bootstrapping methods, such as [375] and [376], iteratively enhance extraction templates with seed examples. Some toolkits, such as NoDoSE [377] and DEByE [378], provide user interfaces for optimizing extraction templates. Template-based wrappers are easy to understand and achieve reasonable results when page structures are well-formed, but they fail to capture content wrapped in intricate or novel elements and structures.
Users can also utilize methods based on the statistical features of web pages to obtain informative content. Finn et al. [379] propose the empirical assumption that an informative sub-sequence of a web page contains many words and few tags. Many models exploit statistical features of web content for extraction, such as CETR [380] (the ratio of text length to tag number), CETD [381] (text density in each sub-tree of a DOM tree), and CEPR [382] (the path ratio of web links). WebCollector [47] integrates these statistic-based models for content extraction. Another heuristic research direction is visual-feature-based methods. For example, VIPS [383] utilizes the visual appearance of a page (such as fonts and color types) to build a content structure tree for content extraction.
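The text-to-tag-ratio idea behind CETR [380] can be sketched in a few lines; the threshold and the sample lines below are illustrative assumptions rather than values from the original paper:

```python
import re

def text_tag_ratio(line: str) -> float:
    """CETR-style score: characters of visible text per tag on a line.
    Lines dominated by markup (navigation, ads) score low."""
    tags = re.findall(r"<[^>]+>", line)
    text = re.sub(r"<[^>]+>", "", line).strip()
    return len(text) / max(1, len(tags))

html_lines = [
    '<div class="nav"><a href="/">Home</a><a href="/ads">Ads</a></div>',
    '<p>Knowledge graphs organize entities and relations extracted from text.</p>',
]
scores = [text_tag_ratio(line) for line in html_lines]
# Keep lines whose score exceeds an (illustrative) threshold.
content = [line for line, s in zip(html_lines, scores) if s > 10]
```

The navigation line packs six tags around seven characters of text and is discarded, while the paragraph line scores well above the threshold; CETR refines this idea with smoothing and clustering over per-line ratios.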
Once content extraction has been performed on a semi-structured page, users obtain a renewed, noise-free semi-structured or unstructured document.
E.2 Structure Interpretation
Many tables on web pages function as navigators or style-formatting containers for content (handled by content extractors) and comprise no relational structure. Models must filter out these decorative, non-relational web tables before extracting relational information.
Relational table interpretation is a binary classification task that determines whether a table is informative. Methods analyze the semantic features of table structures for classification. Wang and Hu [384] design a table classifier that combines support vector machines (SVMs) and decision trees over layout and content-type features. Similarly, WebTables [385] develops a rule-based classifier based on table size (number of rows and columns) and tags. Eberius et al. [386] develop the classification system DWTC using features of the table matrix. Many web tables also contain data noise; OCTOPUS [387] therefore incorporates data cleansing into table classification to filter informative tables.
Developing a table interpretation model involves two steps: first, select features of the table forms; then, integrate learning models to analyze the relational semantics in the data. We refer readers to [388] for more table syntax features and high-performance model ensembles.
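The two steps above can be sketched as follows; this is a rule-based stand-in for the learned classifiers of [384, 385], with illustrative features and thresholds rather than the ones used in those systems:

```python
def table_features(table):
    """Step 1: hand-crafted layout features in the spirit of [384, 385]:
    table size, rectangularity, and column-type consistency."""
    n_rows = len(table)
    n_cols = len(table[0]) if table else 0
    rectangular = all(len(row) == n_cols for row in table)

    def col_is_numeric(j):
        # Per-cell numeric flag for column j, skipping the header row.
        return [row[j].replace(".", "", 1).isdigit() for row in table[1:]]

    # Fraction of columns whose body cells share a single type.
    consistent = sum(
        1 for j in range(n_cols) if len(set(col_is_numeric(j))) <= 1
    ) / max(1, n_cols)
    return n_rows, n_cols, rectangular, consistent

def is_relational(table, min_rows=2, min_cols=2):
    """Step 2: a simple rule standing in for an SVM/decision tree."""
    n_rows, n_cols, rect, consistent = table_features(table)
    return n_rows >= min_rows and n_cols >= min_cols and rect and consistent >= 0.8

data_table = [["City", "Population"], ["Oslo", "709000"], ["Bergen", "286000"]]
layout_table = [["Home"], ["About"]]  # one-column navigation "table"
```

Here data_table passes the rule while layout_table does not; a practical system would feed such features into a trained SVM or decision tree rather than hand-set thresholds.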

Funding Sources

  • Australian Research Council (ARC)
  • National Key R&D Program of China
  • National Natural Science Foundation of China (NSFC)
