TCKGE: Transformers with contrastive learning for knowledge graph embedding
Abstract Representation learning of knowledge graphs has emerged as a powerful technique for various downstream tasks. In recent years, numerous research efforts have been devoted to knowledge graph embedding. However, previous approaches usually have difficulty dealing with complex multi-relational knowledge graphs due to their shallow network architectures. In this paper, we propose a novel framework named Transformers with Contrastive learning for Knowledge Graph Embedding (TCKGE), which aims to learn the complex semantics of multi-relational knowledge graphs with deep architectures. To effectively capture the rich semantics of knowledge graphs, our framework leverages powerful Transformers to build a deep hierarchical architecture that dynamically learns the embeddings of entities and relations. To obtain more robust knowledge embeddings with our deep architecture, we design a contrastive learning scheme that facilitates optimization by exploring the effectiveness of several different data augmentation strategies. Experimental results on two benchmark datasets show the superiority of TCKGE over state-of-the-art models.
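The record does not spell out the contrastive objective the abstract refers to. Purely as an illustration, a common choice in contrastive representation learning is an InfoNCE-style loss, where an embedding is pulled toward an augmented "positive" view of itself and pushed away from "negative" embeddings of other entities. The sketch below, in plain Python, is an assumption for illustration only (the function name and all details are not from the paper):

```python
import math

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss for a single anchor embedding.

    anchor, positive, and each entry of negatives are plain lists of
    floats. The positive is typically an augmented view of the anchor;
    the negatives are embeddings of other entities in the batch.
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def cosine(u, v):
        return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

    # Temperature-scaled similarities: one positive pair, many negatives.
    pos = math.exp(cosine(anchor, positive) / temperature)
    neg = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)

    # -log( exp(sim+/t) / (exp(sim+/t) + sum_k exp(sim-_k/t)) )
    return -math.log(pos / (pos + neg))
```

Minimizing this loss over many anchors makes embeddings invariant to the chosen augmentations while keeping distinct entities separated, which matches the abstract's stated goal of more robust knowledge embeddings.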
Media type: Article
Year of publication: 2022
Published: 2022
Contained in: Complete record - volume:11
Contained in: International journal of multimedia information retrieval - 11(2022), 4, 27 Nov., pages 589-597
Language: English
Contributors: Zhang, Xiaowei [author]; Fang, Quan [author]; Hu, Jun [author]; Qian, Shengsheng [author]; Xu, Changsheng [author]
Links: Full text [license required]
BKL: 54.87 (Multimedia); 54.64 (Databases)
Subjects: Augmentation; Contrastive learning; Knowledge graph; Transformer
Notes: © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2022. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
DOI: 10.1007/s13735-022-00256-3
Funding:
Funding institution / project title:
PPN (catalog ID): OLC2080172492
LEADER 01000caa a22002652 4500
001    OLC2080172492
003    DE-627
005    20240405160100.0
007    tu
008    230131s2022 xx ||||| 00| ||eng c
024 7  |a 10.1007/s13735-022-00256-3 |2 doi
035    |a (DE-627)OLC2080172492
035    |a (DE-He213)s13735-022-00256-3-p
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
082 04 |a 004 |a 660 |a 070 |a 020 |q VZ
084    |a 54.87 |2 bkl
084    |a 54.64 |2 bkl
100 1  |a Zhang, Xiaowei |e verfasserin |4 aut
245 10 |a TCKGE: Transformers with contrastive learning for knowledge graph embedding
264  1 |c 2022
336    |a Text |b txt |2 rdacontent
337    |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338    |a Band |b nc |2 rdacarrier
500    |a © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2022. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
520    |a Abstract Representation learning of knowledge graphs has emerged as a powerful technique for various downstream tasks. In recent years, numerous research efforts have been devoted to knowledge graph embedding. However, previous approaches usually have difficulty dealing with complex multi-relational knowledge graphs due to their shallow network architectures. In this paper, we propose a novel framework named Transformers with Contrastive learning for Knowledge Graph Embedding (TCKGE), which aims to learn the complex semantics of multi-relational knowledge graphs with deep architectures. To effectively capture the rich semantics of knowledge graphs, our framework leverages powerful Transformers to build a deep hierarchical architecture that dynamically learns the embeddings of entities and relations. To obtain more robust knowledge embeddings with our deep architecture, we design a contrastive learning scheme that facilitates optimization by exploring the effectiveness of several different data augmentation strategies. Experimental results on two benchmark datasets show the superiority of TCKGE over state-of-the-art models.
650  4 |a Augmentation
650  4 |a Contrastive learning
650  4 |a Knowledge graph
650  4 |a Transformer
700 1  |a Fang, Quan |0 (orcid)0000-0003-4190-1529 |4 aut
700 1  |a Hu, Jun |4 aut
700 1  |a Qian, Shengsheng |4 aut
700 1  |a Xu, Changsheng |4 aut
773 08 |i Enthalten in |t International journal of multimedia information retrieval |d Springer London, 2012 |g 11(2022), 4 vom: 27. Nov., Seite 589-597 |w (DE-627)684132834 |w (DE-600)2647391-4 |w (DE-576)9684132832 |x 2192-6611 |7 nnns
773 18 |g volume:11 |g year:2022 |g number:4 |g day:27 |g month:11 |g pages:589-597
856 41 |u https://doi.org/10.1007/s13735-022-00256-3 |z lizenzpflichtig |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_OLC
912    |a SSG-OLC-PHA
936 bk |a 54.87 |j Multimedia |q VZ
936 bk |a 54.64 |j Datenbanken |q VZ
951    |a AR
952    |d 11 |j 2022 |e 4 |b 27 |c 11 |h 589-597