Construction of Cultural Heritage Knowledge Graph Based on Graph Attention Neural Network
Figure 1. Unified data modeling for knowledge information based on knowledge graphs.
Figure 2. Entity-relationship joint extraction model based on segmental attention fusion mechanism.
Figure 3. Pooling attention mechanism.
Figure 4. Segmental attention fusion mechanism.
Figure 5. Training accuracy trend over 19 epochs.
Figure 6. Architecture diagram of the unified data model for knowledge information on Tang Dynasty gold and silver artifacts.
Figure 7. Framework of the Tang Dynasty gold and silver artifacts knowledge retrieval system.
Figure 8. Knowledge upload interface.
Figure 9. View knowledge upload status.
Figure 10. Related entities of “Gilded Musician Pattern Silver Cup”.
Figure 11. Knowledge comparison.
Abstract
1. Introduction
2. Research Objectives
3. Unified Data Modeling in the Cultural Heritage Knowledge Collection Stage for Design
3.1. Data Feature Analysis
- (1) Information diversity: design knowledge encompasses structured, semistructured, and unstructured data; these diverse forms make integration and processing cumbersome.
- (2) Information overload: the sheer volume of design knowledge includes a significant amount of irrelevant or low-quality information, making it challenging for designers to promptly and accurately extract what they need.
- (3) Ambiguity in meaning: different teams and organizations use different standards to describe design knowledge, so interpretations may diverge across contexts, increasing communication costs.
- (4) Dynamic iteration: design knowledge is continuously updated and iterated upon, and its collection progresses alongside the design project’s development and evolving requirements.
3.2. Unified Data Modeling Process for Knowledge Information Based on Knowledge Graphs
3.3. Key Technology Research
4. Data Model Construction
4.1. Entity-Relationship Extraction Model Construction
4.1.1. Embedding Layer
4.1.2. Entity Recognition Layer
4.1.3. Relationship Classification Layer
4.2. Model Training and Optimization
5. Experiment Validation
5.1. Dataset Creation
5.2. Experimental Environment
5.3. Parameter Settings
5.4. Standard Dataset Experimental Results
5.5. Experimental Results on the Tang Dynasty Gold and Silver Artifacts Dataset
6. Knowledge Retrieval of Tang Dynasty Gold and Silver Artifacts
6.1. Overall Functionality
- (1) Schema layer: This layer defines the names, entities, attributes, and relationships of Tang Dynasty gold and silver artifacts. It constructs the concepts, classifications, hierarchical structures, and ontology models of this domain, expressing the semantic relationships among entities to ensure the completeness and consistency of the knowledge graph data. Starting from the top-level concepts, the structure is progressively refined to obtain the entities and relationships of gold and silver artifacts.
- (2) Data layer: Structured data, such as tables and relational databases, can be converted directly into knowledge graph representations based on the unified data model for Tang Dynasty gold and silver artifacts. Historians and archaeologists have conducted systematic research on these artifacts, documented mainly in reports and published books; such sources are expressed in a standardized manner and contain dense, rich knowledge, but as unstructured data they must be collected and organized manually. The dataset originates from published bibliographies related to Tang Dynasty gold and silver artifacts, with entities, attributes, and semantic relationships annotated.
- (3) Knowledge management layer: The knowledge graph effectively represents the complex knowledge extracted and integrated from various sources, facilitating formatted storage and retrieval. To help designers manage knowledge of Tang Dynasty gold and silver artifacts and precisely locate information, a knowledge management prototype system based on a graph database has been designed. Its functions comprise four modules: data query, knowledge extraction, knowledge graph visualization, and knowledge base management.
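The data query module above can be sketched as a minimal in-memory graph. This is a toy stand-in, not the paper's implementation: the prototype stores triples in a graph database, and the entity names below are illustrative examples taken from this article's artifact table.

```python
from collections import defaultdict

class KnowledgeGraph:
    """Toy stand-in for the graph-database storage used by the prototype."""

    def __init__(self):
        # adjacency list: head entity -> list of (relationship, tail entity)
        self.adj = defaultdict(list)

    def add_triple(self, head, rel, tail):
        self.adj[head].append((rel, tail))

    def related(self, entity):
        """Return all (relationship, tail) pairs linked to an entity."""
        return list(self.adj[entity])

kg = KnowledgeGraph()
kg.add_triple("Gilded Musician Pattern Silver Cup", "Shape", "Octagonal")
kg.add_triple("Gilded Musician Pattern Silver Cup", "Part", "Circular handle")
kg.add_triple("Circular handle", "Part", "Finger rest")

print(kg.related("Gilded Musician Pattern Silver Cup"))
# [('Shape', 'Octagonal'), ('Part', 'Circular handle')]
```

A real deployment would replace the dictionary with queries against the graph database, but the interface — look up an entity, traverse its relationships — is the same.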
6.2. System Interface
- (1) Data transmission: As shown in Figure 8, users can upload knowledge information on the platform. After the knowledge is uploaded to the database, the internal model is invoked to learn from and process it, including knowledge extraction, knowledge fusion, and integration with the existing knowledge graph; the processed knowledge is then stored in the database. Figure 9 shows the knowledge upload status, displaying the upload progress, including the number of files uploaded; uploaded knowledge can be further managed and modified on this interface.
- (2) Knowledge retrieval: As shown in Figure 10, after entering “Gilded Musician Pattern Silver Cup” in the search bar, the graph centers on the “Gilded Musician Pattern Silver Cup” entity. The right panel displays the total number of related entities (26), the number of relationship types (7), and all associated entities. By selecting any entity in the graph, users can navigate to the pages of other related entities.
- (3) Graph comparison: As shown in Figure 11, users can search for and locate various types of artifact knowledge in the comparison interface. The system calculates the number of entities related to each gold and silver artifact and the number of relationships between them. To present the data visually, it generates a bar chart of the top five relationships by count for the compared artifacts: the horizontal axis shows the relationships and the vertical axis shows each relationship’s frequency in the corresponding artifacts. The names of the artifacts associated with specific relationships are listed below the chart, enabling quick identification and further analysis.
7. Discussion and Outlook
7.1. Association between Knowledge Graphs and Artifact Information
7.2. Effectiveness of Model Construction
7.3. Digital Innovation in Cultural Heritage Preservation
7.4. Limitations and Outlook
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
| Name | Text | Entity | Relationship | Entity |
|---|---|---|---|---|
| Gilded Musician Pattern Silver Cup | The gilded musician pattern silver cup has an octagonal body, with each edge slightly concave in the middle and arched at both ends, forming a curved beaded pattern that adds a touch of softness to the otherwise rigid lines of the shape. Each of the eight curved surfaces of the cup body features a musician: four are playing instruments like the pan flute, small drum, vertical flute, or pipa, while the other four are dancing with sleeves, holding a pot, or holding a cup. All eight musicians are depicted as Hu people, adorned with flat-chiseled scrolling vines, mountains, and birds, and fish-egg patterns as background. The bottom of the cup is bordered with a string of beads and turns inward to form a round base, connecting to a trumpet-shaped short ring foot, which is also decorated with bead patterns. The base and the ring foot are chiseled with floral and cloud patterns, and filled with fish-egg patterns. One side of the cup has a circular handle decorated with two relief heads of Hu people with deep-set eyes, broad noses, and long beards, facing each other at the back of their heads. This feature is also seen in Sogdian silverware: the outside of the ring handle has a three-dimensional animal head, and the hook tail of the handle is welded to the cup body. The inner and outer walls of the cup are fully gilded. The strong exotic style indicates that this gilded musician pattern silver cup is a piece of Sogdian silverware. | Gilded Musician Pattern Silver Cup | Shape | Octagonal |
| | | Gilded Musician Pattern Silver Cup | Structure | Eight curved surfaces |
| | | Gilded Musician Pattern Silver Cup | Decoration | Musicians |
| | | Octagonal | Decoration | Curved beaded pattern |
| | | Musicians | Shape | Pan flute |
| | | Musicians | Shape | Small drum |
| | | Musicians | Shape | Vertical flute |
| | | Musicians | Shape | Pipa |
| | | Musicians | Shape | Dancing with sleeves |
| | | Musicians | Shape | Holding a pot |
| | | Musicians | Shape | Holding a cup |
| | | Musicians | Shape | Hu people |
| | | Gilded Musician Pattern Silver Cup | Decorative Technique | Flat-chiseling |
| | | Gilded Musician Pattern Silver Cup | Decoration | Scrolling vines |
| | | Gilded Musician Pattern Silver Cup | Decoration | Mountains and birds |
| | | Gilded Musician Pattern Silver Cup | Decoration | Fish-egg pattern |
| | | Gilded Musician Pattern Silver Cup | Decoration | Background filling |
| | | Gilded Musician Pattern Silver Cup | Decoration | Beaded pattern |
| | | Gilded Musician Pattern Silver Cup | Decoration | Round base |
| | | Gilded Musician Pattern Silver Cup | Decoration | Trumpet-shaped foot |
| | | Gilded Musician Pattern Silver Cup | Decoration | Floral and cloud patterns |
| | | Gilded Musician Pattern Silver Cup | Part | Circular handle |
| | | Circular handle | Part | Finger rest |
| | | Gilded Musician Pattern Silver Cup | Cultural Origin | Sogdian silverware |
| | | Circular handle | Structure | Outside of the handle |
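Annotated triples like those in the table above can be turned into graph-database import statements. The sketch below emits Cypher `MERGE` strings; the `Entity` label and `name` property are assumptions for illustration, not the schema actually used in the paper.

```python
def triple_to_cypher(head, rel, tail):
    """Render one (head, relationship, tail) triple as a Cypher MERGE statement.

    The relationship type is normalized to Cypher's conventional
    UPPER_SNAKE_CASE; label/property names here are hypothetical.
    """
    rel_type = rel.upper().replace(" ", "_")
    return (
        f'MERGE (h:Entity {{name: "{head}"}}) '
        f'MERGE (t:Entity {{name: "{tail}"}}) '
        f'MERGE (h)-[:{rel_type}]->(t)'
    )

print(triple_to_cypher(
    "Gilded Musician Pattern Silver Cup", "Cultural Origin", "Sogdian silverware"
))
```

Using `MERGE` rather than `CREATE` keeps the import idempotent: re-running the same triple neither duplicates nodes nor edges.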
| | NYT | Tang Dynasty Gold and Silver Artifacts |
|---|---|---|
| Training Set | 56,195 | 122,442 |
| Validation Set | 4999 | 15,922 |
| Test Set | 5000 | 25,109 |
| Normal | 3266 | 14,220 |
| SEO | 1297 | 6440 |
| EPO | 978 | 10 |
| Relationships | 14 | 122,442 |
| Learning Rate | Max Epoch | Batch Size | Seed |
|---|---|---|---|
| 1 × 10⁻⁵ | 30 | 8 | 42 |
NYT

| Model | Prec | Rec | F1 |
|---|---|---|---|
| CopyRE | 59.6 | 54.6 | 57.0 |
| GraphRel | 62.7 | 60.1 | 61.3 |
| CopyMTL | 71.3 | 68.5 | 69.9 |
| RSAN | 71.7 | 87.1 | 78.7 |
| Our model | 85.4 | 85.5 | 85.2 |
| Dataset | Model | Normal | EPO | SEO |
|---|---|---|---|---|
| NYT | CopyRE | 63.0 | 42.8 | 50.5 |
| | GraphRel | 63.1 | 50.8 | 59.7 |
| | CopyMTL | 71.3 | 56.8 | 68.4 |
| | RSAN | 75.3 | 84.9 | 75.3 |
| | Our model | 85.3 | 85.3 | 81.5 |
| Dataset | Model | N = 1 | N = 2 | N = 3 | N = 4 | N ≥ 5 |
|---|---|---|---|---|---|---|
| NYT | CopyRE | 67.0 | 56.2 | 51.2 | 47.2 | 25.8 |
| | GraphRel | 63.7 | 64.6 | 58.9 | 55.2 | 47.1 |
| | CopyMTL | 71.2 | 71.3 | 70.3 | 73.1 | 48.9 |
| | RSAN | 73.3 | 82.1 | 82.7 | 84.5 | 76.4 |
| | Our model | 84.1 | 85.4 | 86.1 | 85.5 | 85.3 |
| SEO Prec | SEO Rec | SEO F1 | EPO Prec | EPO Rec | EPO F1 | All Prec | All Rec | All F1 |
|---|---|---|---|---|---|---|---|---|
| 84.05 | 84.03 | 84.9 | 85.6 | 86.1 | 85.6 | 84 | 84 | 84.9 |
Metric | Value |
---|---|
Training Accuracy | 0.8506 |
Validation Accuracy | 0.8050 |
Epochs | 19 |
Time per Epoch (seconds) | 996 |
Wang, Y.; Liu, J.; Wang, W.; Chen, J.; Yang, X.; Sang, L.; Wen, Z.; Peng, Q. Construction of Cultural Heritage Knowledge Graph Based on Graph Attention Neural Network. Appl. Sci. 2024, 14, 8231. https://doi.org/10.3390/app14188231