
Two-View Graph Neural Networks for Knowledge Graph Completion

  • Conference paper
  • First Online:
The Semantic Web (ESWC 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13870)

Abstract

We present an effective graph neural network (GNN)-based knowledge graph embedding model, which we name WGE, to capture entity- and relation-focused graph structures. Given a knowledge graph, WGE builds a single undirected entity-focused graph that views entities as nodes. WGE also constructs another single undirected graph from relation-focused constraints, which views entities and relations as nodes. WGE then employs a GNN-based architecture to better learn vector representations of entities and relations from these two single entity- and relation-focused graphs. WGE feeds the learned entity and relation representations into a weighted score function to return the triple scores for knowledge graph completion. Experimental results show that WGE outperforms strong baselines on seven benchmark datasets for knowledge graph completion.
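
To make the two views concrete, here is a minimal, hypothetical sketch of how the two graphs could be assembled from a list of triples. The construction rules and variable names are our illustration of the abstract's description (the relation-focused view is reminiscent of a Levi graph [19]), not the authors' specification:

```python
# Hedged sketch of the two graph views described in the abstract.
from collections import defaultdict

triples = [("h1", "r1", "t1"), ("h1", "r2", "t2"), ("t1", "r2", "t2")]

# Entity-focused view: one undirected graph over entities; two entities
# are adjacent if they co-occur in some triple.
entity_adj = defaultdict(set)
for h, r, t in triples:
    entity_adj[h].add(t)
    entity_adj[t].add(h)

# Relation-focused view: one undirected graph whose nodes are both
# entities and relations; each triple (h, r, t) links the relation
# node r to its head h and tail t (a Levi-graph-style construction).
relation_adj = defaultdict(set)
for h, r, t in triples:
    relation_adj[r].update({h, t})
    relation_adj[h].add(r)
    relation_adj[t].add(r)
```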


Notes

  1. https://github.com/tsafavi/codex [28].

  2. https://github.com/GenetAsefa/LiterallyWikidata [11].

  3. Note that the experimental setup is the same for both QuatE and WGE to ensure a fair comparison, as WGE uses QuatE for decoding. Zhang et al. [39] reported an MRR of 0.348 and Hits@10 of 55.0% on FB15K-237 for QuatE; however, we could not reproduce those scores.

  4. Our training protocol monitors the MRR score on the validation set to select the best model checkpoint.

References

  1. Balažević, I., Allen, C., Hospedales, T.M.: TuckER: tensor factorization for knowledge graph completion. In: EMNLP, pp. 5185–5194 (2019)

  2. Berant, J., Chou, A., Frostig, R., Liang, P.: Semantic parsing on Freebase from question-answer pairs. In: EMNLP, pp. 1533–1544 (2013)

  3. Bordes, A., Usunier, N., García-Durán, A., Weston, J., Yakhnenko, O.: Translating embeddings for modeling multi-relational data. In: NIPS, pp. 2787–2795 (2013)

  4. Bordes, A., Weston, J., Collobert, R., Bengio, Y.: Learning structured embeddings of knowledge bases. In: AAAI, pp. 301–306 (2011)

  5. Chen, X., Zhou, Z., Gao, M., Shi, D., Husen, M.N.: Knowledge representation combining quaternion path integration and depth-wise atrous circular convolution. In: UAI, pp. 336–345 (2022)

  6. Demir, C., Ngomo, A.-C.N.: Convolutional complex knowledge graph embeddings. In: Verborgh, R., et al. (eds.) ESWC 2021. LNCS, vol. 12731, pp. 409–424. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-77385-4_24

  7. Dettmers, T., Minervini, P., Stenetorp, P., Riedel, S.: Convolutional 2D knowledge graph embeddings. In: AAAI, pp. 1811–1818 (2018)

  8. Dutta, S., Weikum, G.: Cross-document co-reference resolution using sample-based clustering with knowledge enrichment. Trans. ACL 3, 15–28 (2015)

  9. Fader, A., Zettlemoyer, L., Etzioni, O.: Open question answering over curated and extracted knowledge bases. In: KDD, pp. 1156–1165 (2014)

  10. Ferrucci, D.A.: Introduction to "This is Watson". IBM J. Res. Dev. 56(3), 235–249 (2012)

  11. Gesese, G.A., Alam, M., Sack, H.: LiterallyWikidata - a benchmark for knowledge graph completion using literals. In: Hotho, A., et al. (eds.) ISWC 2021. LNCS, vol. 12922, pp. 511–527. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-88361-4_30

  12. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: AISTATS, pp. 249–256 (2010)

  13. Hamilton, W.L., Ying, R., Leskovec, J.: Inductive representation learning on large graphs. In: NeurIPS (2017)

  14. Hamilton, W.L., Ying, R., Leskovec, J.: Representation learning on graphs: methods and applications. IEEE Data Eng. Bull. 40(3), 52–74 (2018)

  15. Hamilton, W.R.: On quaternions; or on a new system of imaginaries in algebra. London Edinb. Dublin Philos. Mag. J. Sci. 25(163), 10–13 (1844)

  16. Kingma, D., Ba, J.: Adam: a method for stochastic optimization. In: ICLR (2015)

  17. Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. In: ICLR (2017)

  18. Krishnamurthy, J., Mitchell, T.: Weakly supervised training of semantic parsers. In: EMNLP-CoNLL, pp. 754–765 (2012)

  19. Levi, F.W.: Finite Geometrical Systems: Six Public Lectures Delivered in February, 1940, at the University of Calcutta. University of Calcutta (1942)

  20. Nguyen, D.Q., Nguyen, D.Q., Nguyen, T.D., Phung, D.: Convolutional neural network-based model for knowledge base completion and its application to search personalization. Semant. Web 10(5), 947–960 (2019)

  21. Nguyen, D.Q., Nguyen, T.D., Phung, D.: Quaternion graph neural networks. In: ACML (2021)

  22. Nguyen, D.Q., Tong, V., Phung, D., Nguyen, D.Q.: Node co-occurrence based graph neural networks for knowledge graph link prediction. In: WSDM, pp. 1589–1592 (2022)

  23. Nguyen, D.Q.: A survey of embedding models of entities and relationships for knowledge graph completion. In: TextGraphs, pp. 1–14 (2020)

  24. Nickel, M., Rosasco, L., Poggio, T.: Holographic embeddings of knowledge graphs. In: AAAI, pp. 1955–1961 (2016)

  25. Parcollet, T., Morchid, M., Linarès, G.: A survey of quaternion neural networks. Artif. Intell. Rev. 53, 2957–2982 (2020)

  26. Paszke, A., Gross, S., Massa, F., et al.: PyTorch: an imperative style, high-performance deep learning library. In: NeurIPS, pp. 8024–8035 (2019)

  27. Ponzetto, S.P., Strube, M.: Exploiting semantic role labeling, WordNet and Wikipedia for coreference resolution. In: NAACL, pp. 192–199 (2006)

  28. Safavi, T., Koutra, D.: CoDEx: a comprehensive knowledge graph completion benchmark. In: EMNLP, pp. 8328–8350 (2020)

  29. Scarselli, F., Gori, M., Tsoi, A.C., Hagenbuchner, M., Monfardini, G.: The graph neural network model. IEEE Trans. Neural Netw. 20(1), 61–80 (2009)

  30. Schlichtkrull, M., Kipf, T.N., Bloem, P., van den Berg, R., Titov, I., Welling, M.: Modeling relational data with graph convolutional networks. In: Gangemi, A., et al. (eds.) ESWC 2018. LNCS, vol. 10843, pp. 593–607. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-93417-4_38

  31. Shang, C., Tang, Y., Huang, J., Bi, J., He, X., Zhou, B.: End-to-end structure-aware convolutional networks for knowledge base completion. In: AAAI, pp. 3060–3067 (2019)

  32. Toutanova, K., Chen, D.: Observed versus latent features for knowledge base and text inference. In: CVSC, pp. 57–66 (2015)

  33. Trouillon, T., Welbl, J., Riedel, S., Gaussier, É., Bouchard, G.: Complex embeddings for simple link prediction. In: ICML, pp. 2071–2080 (2016)

  34. Vashishth, S., Sanyal, S., Nitin, V., Talukdar, P.: Composition-based multi-relational graph convolutional networks. In: ICLR (2020)

  35. Veličković, P., Cucurull, G., Casanova, A., Romero, A., Liò, P., Bengio, Y.: Graph attention networks. In: ICLR (2018)

  36. Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Yu, P.S.: A comprehensive survey on graph neural networks. IEEE Trans. Neural Netw. Learn. Syst. 32(1), 4–24 (2021)

  37. Yang, B., Yih, W.T., He, X., Gao, J., Deng, L.: Embedding entities and relations for learning and inference in knowledge bases. In: ICLR (2015)

  38. Zhang, D., Yin, J., Zhu, X., Zhang, C.: Network representation learning: a survey. IEEE Trans. Big Data 6, 3–28 (2020)

  39. Zhang, S., Tay, Y., Yao, L., Liu, Q.: Quaternion knowledge graph embeddings. In: NeurIPS, pp. 2731–2741 (2019)

  40. Zhang, Y., Yao, Q., Dai, W., Chen, L.: AutoSF: searching scoring functions for knowledge graph embedding. In: ICDE, pp. 433–444 (2020)


Acknowledgment

Most of this work was done while Vinh Tong was a research resident at VinAI Research, Vietnam.

Author information

Corresponding author

Correspondence to Dat Quoc Nguyen.


Appendix

Hyper-complex vector spaces have recently been considered in the form of the Quaternion space [15], which consists of one real axis and three separate imaginary axes. Compared to the Euclidean and complex vector spaces, it provides highly expressive computations through the Hamilton product. We provide the key notations and operations of the Quaternion space required for our later development. Additional details can be found in [25].

A quaternion \(q \in \mathbb {H}\) is a hyper-complex number consisting of one real and three separate imaginary components [15] defined as:

$$\begin{aligned} q = q_r + q_i\boldsymbol{\textsf{i}} + q_j\boldsymbol{\textsf{j}} + q_k\boldsymbol{\textsf{k}} \end{aligned}$$
(17)

where \(q_r, q_i, q_j, q_k \in \mathbb {R}\), and \(\boldsymbol{\textsf{i}}, \boldsymbol{\textsf{j}}, \boldsymbol{\textsf{k}}\) are imaginary units such that \(\boldsymbol{\textsf{i}}^2 = \boldsymbol{\textsf{j}}^2 = \boldsymbol{\textsf{k}}^2 = \boldsymbol{\textsf{i}}\boldsymbol{\textsf{j}}\boldsymbol{\textsf{k}} = -1\). The operations of the Quaternion algebra are defined as follows:

Addition. The addition of two quaternions q and p is defined as:

$$\begin{aligned} q + p = (q_r + p_r) + (q_i + p_i) \boldsymbol{\textsf{i}} + (q_j + p_j)\boldsymbol{\textsf{j}} + (q_k + p_k)\boldsymbol{\textsf{k}} \end{aligned}$$
(18)

Norm. The norm \(\Vert q\Vert \) of a quaternion q is computed as:

$$\begin{aligned} \Vert q\Vert = \sqrt{q_r^2 + q_i^2 + q_j^2 + q_k^2} \end{aligned}$$
(19)

The normalized (unit) quaternion \(q^\triangleleft \) is then defined as \(q^\triangleleft = \frac{q}{\Vert q\Vert }\).

Scalar Multiplication. The multiplication of a scalar \(\lambda \) and q is computed as follows:

$$\begin{aligned} \lambda q = \lambda q_r + \lambda q_i\boldsymbol{\textsf{i}} + \lambda q_j\boldsymbol{\textsf{j}} + \lambda q_k\boldsymbol{\textsf{k}} \end{aligned}$$
(20)

Conjugate. The conjugate \(q^*\) of a quaternion q is defined as:

$$\begin{aligned} q^*= q_r - q_i\boldsymbol{\textsf{i}} - q_j\boldsymbol{\textsf{j}} - q_k\boldsymbol{\textsf{k}} \end{aligned}$$
(21)
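
For concreteness, the following NumPy sketch implements Eqs. (18)-(21). The representation (a length-4 real array \([q_r, q_i, q_j, q_k]\)) and the function names are our illustration, not the authors' implementation:

```python
# Minimal sketch of the basic quaternion operations, Eqs. (18)-(21).
import numpy as np

def q_add(q, p):
    # Eq. (18): component-wise addition.
    return q + p

def q_norm(q):
    # Eq. (19): Euclidean norm over the four real components.
    return np.sqrt(np.sum(q ** 2))

def q_unit(q):
    # Normalized (unit) quaternion q / ||q||.
    return q / q_norm(q)

def q_scale(lam, q):
    # Eq. (20): scalar multiplication.
    return lam * q

def q_conj(q):
    # Eq. (21): negate the three imaginary components.
    return q * np.array([1.0, -1.0, -1.0, -1.0])

q = np.array([1.0, 2.0, 3.0, 4.0])
p = np.array([0.5, -1.0, 0.0, 2.0])
print(q_add(q, p), q_norm(q), q_unit(q), q_conj(q), sep="\n")
```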

Hamilton Product. The Hamilton product \(\otimes \) (i.e., the quaternion multiplication) of two quaternions q and p is defined as:

$$\begin{aligned} q \otimes p ={} &amp; (q_r p_r - q_i p_i - q_j p_j - q_k p_k) \\ &amp;+ (q_i p_r + q_r p_i - q_k p_j + q_j p_k)\boldsymbol{\textsf{i}} \\ &amp;+ (q_j p_r + q_k p_i + q_r p_j - q_i p_k)\boldsymbol{\textsf{j}} \\ &amp;+ (q_k p_r - q_j p_i + q_i p_j + q_r p_k)\boldsymbol{\textsf{k}} \end{aligned}$$
(22)

We can express the Hamilton product of q and p in the following form:

$$\begin{aligned} q \otimes p = \begin{bmatrix} 1 \\ \boldsymbol{\textsf{i}} \\ \boldsymbol{\textsf{j}} \\ \boldsymbol{\textsf{k}} \end{bmatrix}^\top \begin{bmatrix} q_r &amp; -q_i &amp; -q_j &amp; -q_k \\ q_i &amp; q_r &amp; -q_k &amp; q_j \\ q_j &amp; q_k &amp; q_r &amp; -q_i \\ q_k &amp; -q_j &amp; q_i &amp; q_r \end{bmatrix} \begin{bmatrix} p_r \\ p_i \\ p_j \\ p_k \end{bmatrix} \end{aligned}$$
(23)

The Hamilton product of two quaternion vectors \(\boldsymbol{q}\) and \(\boldsymbol{p} \in \mathbb {H}^n\) is computed as:

$$\begin{aligned} \boldsymbol{q} \otimes \boldsymbol{p} ={} &amp; (\boldsymbol{q}_r \circ \boldsymbol{p}_r - \boldsymbol{q}_i \circ \boldsymbol{p}_i - \boldsymbol{q}_j \circ \boldsymbol{p}_j - \boldsymbol{q}_k \circ \boldsymbol{p}_k) \\ &amp;+ (\boldsymbol{q}_i \circ \boldsymbol{p}_r + \boldsymbol{q}_r \circ \boldsymbol{p}_i - \boldsymbol{q}_k \circ \boldsymbol{p}_j + \boldsymbol{q}_j \circ \boldsymbol{p}_k)\boldsymbol{\textsf{i}} \\ &amp;+ (\boldsymbol{q}_j \circ \boldsymbol{p}_r + \boldsymbol{q}_k \circ \boldsymbol{p}_i + \boldsymbol{q}_r \circ \boldsymbol{p}_j - \boldsymbol{q}_i \circ \boldsymbol{p}_k)\boldsymbol{\textsf{j}} \\ &amp;+ (\boldsymbol{q}_k \circ \boldsymbol{p}_r - \boldsymbol{q}_j \circ \boldsymbol{p}_i + \boldsymbol{q}_i \circ \boldsymbol{p}_j + \boldsymbol{q}_r \circ \boldsymbol{p}_k)\boldsymbol{\textsf{k}} \end{aligned}$$
(24)

where \(\circ \) denotes the element-wise product. We note that the Hamilton product is not commutative, i.e., \(q \otimes p \ne p \otimes q\).
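
The sketch below transcribes Eq. (24) directly, storing a quaternion vector as a tuple of four real arrays \((\boldsymbol{q}_r, \boldsymbol{q}_i, \boldsymbol{q}_j, \boldsymbol{q}_k)\); the layout and names are our assumption for exposition:

```python
# Element-wise Hamilton product of quaternion vectors, Eq. (24).
import numpy as np

def hamilton_product(q, p):
    """q, p: tuples (r, i, j, k) of equal-shape real arrays."""
    qr, qi, qj, qk = q
    pr, pi, pj, pk = p
    r = qr * pr - qi * pi - qj * pj - qk * pk
    i = qi * pr + qr * pi - qk * pj + qj * pk
    j = qj * pr + qk * pi + qr * pj - qi * pk
    k = qk * pr - qj * pi + qi * pj + qr * pk
    return (r, i, j, k)

rng = np.random.default_rng(0)
q = tuple(rng.standard_normal(5) for _ in range(4))
p = tuple(rng.standard_normal(5) for _ in range(4))
qp = hamilton_product(q, p)
pq = hamilton_product(p, q)
# Non-commutativity: the imaginary parts generally differ.
print(np.allclose(qp[1], pq[1]))  # False in general
```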

We can derive the product of a quaternion matrix \(\boldsymbol{W} \in \mathbb {H}^{m \times n}\) and a quaternion vector \(\boldsymbol{p} \in \mathbb {H}^{n}\) from Eq. 23 as follows:

$$\begin{aligned} \boldsymbol{W} \otimes \boldsymbol{p} = \begin{bmatrix} 1 \\ \boldsymbol{\textsf{i}} \\ \boldsymbol{\textsf{j}} \\ \boldsymbol{\textsf{k}} \end{bmatrix}^\top \begin{bmatrix} \boldsymbol{W}_r &amp; -\boldsymbol{W}_i &amp; -\boldsymbol{W}_j &amp; -\boldsymbol{W}_k \\ \boldsymbol{W}_i &amp; \boldsymbol{W}_r &amp; -\boldsymbol{W}_k &amp; \boldsymbol{W}_j \\ \boldsymbol{W}_j &amp; \boldsymbol{W}_k &amp; \boldsymbol{W}_r &amp; -\boldsymbol{W}_i \\ \boldsymbol{W}_k &amp; -\boldsymbol{W}_j &amp; \boldsymbol{W}_i &amp; \boldsymbol{W}_r \end{bmatrix} \begin{bmatrix} \boldsymbol{p}_r \\ \boldsymbol{p}_i \\ \boldsymbol{p}_j \\ \boldsymbol{p}_k \end{bmatrix} \end{aligned}$$
(25)

where \(\boldsymbol{p}_r\), \(\boldsymbol{p}_i\), \(\boldsymbol{p}_j\), and \(\boldsymbol{p}_k \in \mathbb {R}^n\) are real vectors; and \(\boldsymbol{W}_r\), \(\boldsymbol{W}_i\), \(\boldsymbol{W}_j\), and \(\boldsymbol{W}_k \in \mathbb {R}^{m \times n}\) are real matrices.
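
Eq. 25 can be realised as a single real \(4m \times 4n\) matrix acting on a stacked real vector, as in this hedged NumPy sketch (names are illustrative assumptions):

```python
# Quaternion matrix-vector product via the real block matrix of Eq. (25).
import numpy as np

def quat_matvec(Wr, Wi, Wj, Wk, pr, pi, pj, pk):
    # Assemble the 4x4 block structure of Eq. (25): shape (4m, 4n).
    W_real = np.block([
        [Wr, -Wi, -Wj, -Wk],
        [Wi,  Wr, -Wk,  Wj],
        [Wj,  Wk,  Wr, -Wi],
        [Wk, -Wj,  Wi,  Wr],
    ])
    p_stacked = np.concatenate([pr, pi, pj, pk])  # shape (4n,)
    out = W_real @ p_stacked                      # shape (4m,)
    return np.split(out, 4)                       # (r, i, j, k) parts, each (m,)

m, n = 3, 5
rng = np.random.default_rng(1)
Ws = [rng.standard_normal((m, n)) for _ in range(4)]
ps = [rng.standard_normal(n) for _ in range(4)]
r, i, j, k = quat_matvec(*Ws, *ps)
```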

Quaternion-Inner Product. The quaternion-inner product \(\bullet \) of two quaternion vectors \(\boldsymbol{q}\) and \(\boldsymbol{p} \in \mathbb {H}^n\) returns a scalar as:

$$\begin{aligned} \boldsymbol{q} \bullet \boldsymbol{p} = \boldsymbol{q}_{r}^\textsf {T}\boldsymbol{p}_{r} + \boldsymbol{q}_{i}^\textsf {T}\boldsymbol{p}_{i} + \boldsymbol{q}_{j}^\textsf {T}\boldsymbol{p}_{j} + \boldsymbol{q}_{k}^\textsf {T}\boldsymbol{p}_{k} \end{aligned}$$
(26)
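
In code, Eq. (26) is simply the sum of the four real inner products; this one-liner (our illustration) makes that explicit:

```python
# Quaternion-inner product of Eq. (26); q and p are (r, i, j, k) tuples
# of real vectors, and the result is a real scalar.
def quat_inner(q, p):
    qr, qi, qj, qk = q
    pr, pi, pj, pk = p
    return qr @ pr + qi @ pi + qj @ pj + qk @ pk
```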

Quaternion Element-Wise Product. We further define the element-wise product of two quaternion vectors \(\boldsymbol{q}\) and \(\boldsymbol{p} \in \mathbb {H}^n\) as follows:

$$\begin{aligned} \boldsymbol{q} * \boldsymbol{p} = (\boldsymbol{q}_r \circ \boldsymbol{p}_r) + (\boldsymbol{q}_i \circ \boldsymbol{p}_i) \boldsymbol{\textsf{i}} + (\boldsymbol{q}_j \circ \boldsymbol{p}_j) \boldsymbol{\textsf{j}} + (\boldsymbol{q}_k \circ \boldsymbol{p}_k) \boldsymbol{\textsf{k}} \end{aligned}$$
(27)
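
Unlike the Hamilton product, Eq. (27) multiplies each of the four components independently (and is therefore commutative), as in this short sketch under the same illustrative (r, i, j, k)-tuple layout:

```python
# Quaternion element-wise product of Eq. (27).
def quat_elementwise(q, p):
    qr, qi, qj, qk = q
    pr, pi, pj, pk = p
    return (qr * pr, qi * pi, qj * pj, qk * pk)
```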


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Tong, V., Nguyen, D.Q., Phung, D., Nguyen, D.Q. (2023). Two-View Graph Neural Networks for Knowledge Graph Completion. In: Pesquita, C., et al. The Semantic Web. ESWC 2023. Lecture Notes in Computer Science, vol 13870. Springer, Cham. https://doi.org/10.1007/978-3-031-33455-9_16


  • DOI: https://doi.org/10.1007/978-3-031-33455-9_16

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-33454-2

  • Online ISBN: 978-3-031-33455-9

  • eBook Packages: Computer Science, Computer Science (R0)
