CN117875374A - Graph representation learning method and device based on generative adversarial network - Google Patents
Graph representation learning method and device based on generative adversarial network
- Publication number
- CN117875374A (application CN202311632050.7A)
- Authority
- CN
- China
- Prior art keywords
- node
- embedding
- discriminator
- generator
- adversarial
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications

- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS › G06N3/00—Computing arrangements based on biological models › G06N3/02—Neural networks › G06N3/04—Architecture, e.g. interconnection topology › G06N3/0475—Generative networks
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06F—ELECTRIC DIGITAL DATA PROCESSING › G06F16/00—Information retrieval; Database structures therefor; File system structures therefor › G06F16/30—Information retrieval of unstructured textual data › G06F16/33—Querying › G06F16/335—Filtering based on additional data, e.g. user or group profiles
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06F—ELECTRIC DIGITAL DATA PROCESSING › G06F16/00—Information retrieval; Database structures therefor; File system structures therefor › G06F16/30—Information retrieval of unstructured textual data › G06F16/35—Clustering; Classification
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06F—ELECTRIC DIGITAL DATA PROCESSING › G06F16/00—Information retrieval; Database structures therefor; File system structures therefor › G06F16/90—Details of database functions independent of the retrieved data types › G06F16/901—Indexing; Data structures therefor; Storage structures › G06F16/9024—Graphs; Linked lists
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS › G06N3/00—Computing arrangements based on biological models › G06N3/02—Neural networks › G06N3/04—Architecture, e.g. interconnection topology › G06N3/045—Combinations of networks
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS › G06N3/00—Computing arrangements based on biological models › G06N3/02—Neural networks › G06N3/08—Learning methods › G06N3/094—Adversarial learning
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Databases & Information Systems (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Image Analysis (AREA)
Abstract
Description
Technical Field

The present invention relates to a representation learning method and device for graph-structured data.

Background Art

The real world contains all kinds of intricate relationships, which form many huge relationship graphs. To better mine valuable information from such graph-structured data, the machine learning field often uses graph representation learning. Methods of this type convert the data constituting each node's information into a low-dimensional vector, called an embedding representation, which is then applied to downstream tasks such as node classification, link prediction, and recommendation. Such methods are also known as graph embedding, network representation learning, or network embedding.

Depending on the learning process, graph representation learning methods can be roughly divided into supervised and unsupervised approaches. State-of-the-art methods typically adopt either supervised models based on graph neural networks, or unsupervised models based on random walks or autoencoders. Supervised models require a large amount of reliable label information in the data; unsupervised models place no strict requirements on the dataset, but the embeddings they learn express information less accurately than those of supervised models.

In the real world, obtaining enough reliable label data for large-scale datasets is extremely difficult, so unsupervised graph representation learning is the more practical choice. Existing unsupervised methods have achieved some results but still lag behind supervised methods in the accuracy of their embedding representations, which limits their use in label-scarce application scenarios such as life safety, risk prediction, and other important fields. Developing a new unsupervised graph representation learning method whose embeddings express information accurately is therefore of great practical significance.

Summary of the Invention

To overcome the low accuracy of the embedding representations produced by existing unsupervised graph representation learning methods, the present invention provides an unsupervised graph representation learning method and device based on a generative adversarial network (GAN).

The invention exploits the graph structure, converts the original data into low-dimensional pre-embeddings via locally linear embedding (LLE), and then learns and optimizes the embedding representations with a generative adversarial model, thereby realizing graph representation learning.

The unsupervised graph representation learning method based on a generative adversarial network consists of a pre-embedding generation phase and a generative adversarial phase. The specific steps are as follows:

Step 1: Pre-embedding generation phase. Use LLE dimensionality reduction to compress the features of the original data and record the reduced result as the pre-embedding.
1.1 For the original feature matrix $X \in \mathbb{R}^{N \times d_0}$ of the initial graph nodes, use the KNN algorithm to find the K nearest neighbors of each node sample, reconstruct each node from its K neighbors, and compute the weights $A_{ij}$. The reconstruction error is given by formula (1):

$$\varepsilon(A)=\sum_{i=1}^{N}\Big\|x_i-\sum_{j=1}^{N}A_{ij}x_j\Big\|^2 \tag{1}$$

where N is the number of nodes, $d_0$ is the dimension of the original feature vectors, and the weight $A_{ij}$ is the contribution of the j-th data point to the reconstruction of the i-th. The weights are obtained by minimizing this cost function under two constraints: first, each node vector $x_i$ may be reconstructed only from its neighbors, so $A_{ij}=0$ if node j is not in the neighbor set; second, each row of the weight matrix sums to one, i.e. $\sum_j A_{ij}=1$.
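The constrained least-squares problem in formula (1) decomposes into one small linear system per node. The following Python sketch illustrates this under the standard LLE formulation; the ridge regularizer `reg` is an assumption added for numerical stability and is not part of the patent text:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def lle_weights(X, K=12, reg=1e-3):
    """Reconstruction weights of formula (1): each row sums to 1 and is
    non-zero only on the K nearest neighbors of the corresponding node."""
    N = X.shape[0]
    # K+1 neighbors because each point is returned as its own nearest neighbor.
    _, idx = NearestNeighbors(n_neighbors=K + 1).fit(X).kneighbors(X)
    A = np.zeros((N, N))
    for i in range(N):
        nbrs = idx[i, 1:]                       # the K nearest neighbors of x_i
        G = X[nbrs] - X[i]                      # local differences, shape (K, d0)
        C = G @ G.T                             # local Gram matrix, shape (K, K)
        C += reg * np.trace(C) * np.eye(K)      # ridge term (assumed, for stability)
        w = np.linalg.solve(C, np.ones(K))      # unnormalized solution of C w = 1
        A[i, nbrs] = w / w.sum()                # enforce the sum-to-one constraint
    return A
```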
1.2 With the optimal reconstruction weight matrix A obtained from formula (1), compute the reduced embedding matrix $Z \in \mathbb{R}^{N \times d}$ with $d \ll d_0$. The embedding loss is given by formula (2):

$$\Phi(Z)=\sum_{i=1}^{N}\Big\|z_i-\sum_{j=1}^{N}A_{ij}z_j\Big\|^2 \tag{2}$$

During this step the weight matrix A is held fixed, and the embedding matrix is obtained by minimizing formula (2).
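Minimizing formula (2) with A held fixed reduces, under the usual LLE normalization constraints (Z has zero mean and unit covariance), to an eigenvector problem on $M=(I-A)^{\top}(I-A)$. The patent does not spell out the solver, so the following is a sketch of the conventional one, continuing the code above:

```python
def lle_embedding(A, d=64):
    """Minimize formula (2) for fixed A under the standard LLE normalization:
    an eigenproblem on M = (I - A)^T (I - A)."""
    N = A.shape[0]
    M = np.eye(N) - A
    M = M.T @ M                        # symmetric positive semidefinite
    _, vecs = np.linalg.eigh(M)        # eigenvectors, ascending eigenvalues
    # Discard the constant eigenvector (eigenvalue ~ 0); keep the next d.
    return vecs[:, 1:d + 1]            # N x d pre-embedding matrix Z
```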
Step 2: Generative adversarial phase. A generative adversarial network model learns the embedding representations of the nodes. The network consists of two parts, a generator and a discriminator, each containing its own node embedding layer; following the idea of adversarial learning, the two push each other to optimize the node embedding representations.

2.1 Before training begins, initialize the generator and the discriminator, using the pre-embedding matrix obtained in the previous step as the initial value of each one's node embedding layer.
2.2 Node-pair sampling. Starting from node i, a random walk guided by the adjacency weights $w_i$ yields a path $\mathrm{Path}_i$, where the adjacency weight $w_i$ of node i is an N-dimensional vector and N is the number of nodes. Its component $w_i^{(j)}$ in the j-th dimension is computed by formulas (3) and (4):

$$\tilde w_i^{(j)}=\begin{cases}\exp\big((z_i^{G})^{\top} z_j^{G}\big), & j\in\mathcal{N}(i)\\ 0, & \text{otherwise}\end{cases} \tag{3}$$

$$w_i^{(j)}=\frac{\tilde w_i^{(j)}}{\sum_{k=1}^{N}\tilde w_i^{(k)}} \tag{4}$$
where $\mathcal{N}(i)$ is the set of neighbor nodes of node i, $z_i^{G}$ is the embedding representation of node i in the generator G, and $Z_G$ is the node embedding layer of the generator G.

$\mathrm{Path}_i$ is the set of all nodes visited by a random walk starting from node i. For small-scale datasets, the walk stops when its next node is already in $\mathrm{Path}_i$; for large-scale datasets, the walk stops after a fixed number of steps.

$\mathrm{Sample}_i^{G}$ is the set of node pairs about node i used for generator training: each pair of adjacent nodes in $\mathrm{Path}_i$ forms a node pair added to $\mathrm{Sample}_i^{G}$. $\mathrm{Sample}_i^{D}$ is the set of node pairs about node i used for discriminator training: the head and tail nodes of $\mathrm{Path}_i$ form a node pair added to $\mathrm{Sample}_i^{D}$. By performing multiple random walks starting from node i, $\mathrm{Sample}_i^{G}$ and $\mathrm{Sample}_i^{D}$ accumulate a sufficient number of node pairs for the subsequent training steps.
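A minimal Python sketch of the sampler in step 2.2 follows. It assumes formulas (3)-(4) take the softmax form given above; `neighbors` (a list mapping each node to its neighbor list) and `num_walks` are illustrative names, not taken from the patent:

```python
import random
import numpy as np

def walk_probs(i, Z_G, neighbors):
    """Adjacency weights of formulas (3)-(4): a softmax over generator-embedding
    inner products, restricted to the neighbors of node i."""
    nbrs = neighbors[i]
    sims = np.array([Z_G[i] @ Z_G[j] for j in nbrs])
    e = np.exp(sims - sims.max())                # numerically stable softmax
    return e / e.sum()

def sample_pairs(start, Z_G, neighbors, num_walks=10, max_steps=None):
    """Step 2.2: returns (Sample_G, Sample_D) for the start node.
    max_steps=None reproduces the small-dataset rule (stop on revisit);
    an integer max_steps reproduces the large-dataset rule."""
    sample_G, sample_D = [], []
    for _ in range(num_walks):
        path = [start]
        while True:
            nbrs = neighbors[path[-1]]
            if not nbrs:
                break
            nxt = random.choices(nbrs, weights=walk_probs(path[-1], Z_G, neighbors))[0]
            if max_steps is None and nxt in path:
                break                            # small datasets: next node already visited
            path.append(nxt)
            if max_steps is not None and len(path) - 1 >= max_steps:
                break                            # large datasets: fixed number of steps
        sample_G += list(zip(path[:-1], path[1:]))   # adjacent pairs -> Sample_G
        if len(path) > 1:
            sample_D.append((path[0], path[-1]))     # head-tail pair -> Sample_D
    return sample_G, sample_D
```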
2.3 Discriminator training. Use the Adam algorithm to minimize the discriminator loss function $L_D$ given by formula (5), optimizing the node embedding layer $Z_D$ of the discriminator D. In formula (5), $z_i^{D}$ denotes the embedding representation of node i in the discriminator D, $\langle h,t\rangle$ denotes a node pair consisting of head node h and tail node t, and norm is a normalization function.
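The image carrying formula (5) is not reproduced in this text, so the sketch below shows only one plausible form consistent with the description: the discriminator scores a pair $\langle h,t\rangle$ through the inner product of the normalized embeddings norm($z_h^D$) and norm($z_t^D$) and is trained with binary cross-entropy. Treating observed edges as positives and the sampled head-tail pairs as negatives is an assumption, not a statement of the patented loss:

```python
import torch
import torch.nn.functional as F

def discriminator_loss(Z_D, pos_pairs, neg_pairs):
    """A plausible form of formula (5); positive/negative split is assumed."""
    def score(pairs):
        h = F.normalize(Z_D[[p[0] for p in pairs]], dim=1)   # norm(z_h^D)
        t = F.normalize(Z_D[[p[1] for p in pairs]], dim=1)   # norm(z_t^D)
        return (h * t).sum(dim=1)                            # connection logit
    return (F.binary_cross_entropy_with_logits(score(pos_pairs),
                                               torch.ones(len(pos_pairs)))
            + F.binary_cross_entropy_with_logits(score(neg_pairs),
                                                 torch.zeros(len(neg_pairs))))
```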
2.4 Generator training. Use the Adam algorithm to minimize the generator loss function $L_G$ given by formula (6), optimizing the node embedding layer $Z_G$ of the generator G.
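Formula (6) is likewise not reproduced. One plausible sketch, hedged in the same way and consistent with step 2.5 of Embodiment 1 (which states that $L_G$ is computed from Sample_G, $Z_D$, and $Z_G$), has the generator adjust its embeddings so the frozen discriminator scores its sampled pairs as connected:

```python
import torch
import torch.nn.functional as F

def generator_loss(Z_G, Z_D, gen_pairs):
    """A plausible (assumed) form of formula (6): the generator tries to make
    the discriminator assign high connection scores to its sampled pairs."""
    h = F.normalize(Z_G[[p[0] for p in gen_pairs]], dim=1)            # generator side
    t = F.normalize(Z_D[[p[1] for p in gen_pairs]], dim=1).detach()   # discriminator side, frozen
    logits = (h * t).sum(dim=1)
    return F.binary_cross_entropy_with_logits(logits, torch.ones(len(gen_pairs)))
```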
Step 3: Execute the generative adversarial phase repeatedly until the generative adversarial model converges. At that point, the node embedding layers $Z_G$ and $Z_D$ in the model are the final learned embedding representation matrices of the graph nodes.
A second aspect of the present invention relates to an unsupervised graph representation learning device based on a generative adversarial network, comprising a memory and one or more processors, the memory storing executable code which, when executed by the one or more processors, implements the unsupervised graph representation learning method based on a generative adversarial network of the present invention.

A third aspect of the present invention relates to a computer-readable storage medium on which a program is stored; when executed by a processor, the program implements the unsupervised graph representation learning method based on a generative adversarial network of the present invention.
Combining the above techniques, the present invention proposes an unsupervised graph representation learning method based on a generative adversarial network. To address the low accuracy of the embeddings obtained by existing unsupervised graph representation learning methods, it uses a generator that obtains positive and negative node-pair samples through neighbor-weighted random walks, and a discriminator responsible for judging whether a real connection exists between a node pair, so that node embeddings are learned and optimized automatically in an adversarial manner. In addition, because the pre-embedding strongly affects the training of the generative adversarial model, LLE dimensionality reduction is used to obtain high-quality pre-embeddings.

The advantages of the present invention are: (1) LLE dimensionality reduction produces high-quality pre-embeddings, ensuring effective optimization of the GAN model. (2) The generative adversarial model takes the existence of a connection between a node pair as the point of contention and replaces the strategy of randomly generating node embeddings with random-walk sampling, effectively removing redundant information. (3) As an unsupervised graph representation learning method, it places no strict requirements on the dataset, and the learned embeddings match or even exceed supervised learning methods in the accuracy of the information they express.
Brief Description of the Drawings

FIG. 1 is an overall flow chart of the method of the present invention.

FIG. 2 is a detailed illustration of the method of the present invention.

Detailed Description

Specific embodiments of the present invention are described below with reference to examples.
Embodiment 1

This embodiment applies the unsupervised graph representation learning method based on a generative adversarial network of the present invention to document classification and document recommendation.

Suppose graph representation learning is performed on the Citeseer citation dataset, which has 3327 nodes and 4552 edges, each node carrying a 3703-dimensional feature vector.

Graph representation learning is performed on the Citeseer dataset using the iterative aggregation graph representation learning method based on a generative adversarial network, with the following steps:

1. Obtain the pre-embedding.
1.1 For the original feature matrix X of the initial graph nodes, of dimension 3327×3703, use the KNN algorithm to find the 12 nearest neighbors of each node sample, reconstruct each node from its 12 neighbors, and compute the optimal reconstruction weight matrix A, of dimension 3327×3327.

1.2 Holding fixed the optimal reconstruction weight matrix A obtained in 1.1, use formula (2) to compute the reduced embedding matrix Z of dimension 3327×64.
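For reference, steps 1.1-1.2 correspond closely to off-the-shelf LLE. A sketch using scikit-learn (the patent does not mandate any particular library, and `X` is assumed to already hold the 3327×3703 feature matrix):

```python
from sklearn.manifold import LocallyLinearEmbedding

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=64)
Z = lle.fit_transform(X)   # X: (3327, 3703) -> Z: (3327, 64) pre-embedding
```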
2. Adversarial training.

2.1 Initialize the generative adversarial network.

(1) Initialize the generator embedding matrix Z_G, of dimension 3327×64, with the values of the pre-embedding matrix from the previous step.

(2) Initialize the discriminator embedding matrix Z_D, of dimension 3327×64, with the values of the pre-embedding matrix from the previous step.

2.2 Obtain the training samples.

2.3 Compute the discriminator loss function L_D from the node-pair set Sample_D and the embedding matrix Z_D.

2.4 Use the Adam algorithm to minimize the discriminator loss function L_D, optimizing the discriminator embedding matrix Z_D.

2.5 Compute the generator loss function L_G from the node-pair set Sample_G, the embedding matrix Z_D, and the embedding matrix Z_G.

2.6 Use the Adam algorithm to minimize the generator loss function L_G, optimizing the generator embedding matrix Z_G.
3. Repeat the training to obtain the node embedding representations.

3.1 Taking steps 2.2 through 2.6 as one complete iteration, repeat until 200 iterations have been performed or the values of the loss functions L_G and L_D no longer decrease significantly. The evolution of the model's internal embedding matrices over the iterations is shown in FIG. 2.
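The outer loop of steps 2.2-2.6 can be sketched as follows; `Z_pre`, `get_training_samples`, and `edges` are illustrative stand-ins not named by the patent, and the two loss functions are the hedged sketches given earlier:

```python
import torch

# Z_pre: the 3327 x 64 pre-embedding from step 1 (name assumed for illustration).
Z_G = torch.nn.Parameter(torch.as_tensor(Z_pre, dtype=torch.float32).clone())  # 2.1 (1)
Z_D = torch.nn.Parameter(torch.as_tensor(Z_pre, dtype=torch.float32).clone())  # 2.1 (2)
opt_D = torch.optim.Adam([Z_D])
opt_G = torch.optim.Adam([Z_G])

for it in range(200):                                # 3.1: at most 200 iterations
    sample_G, sample_D = get_training_samples(Z_G)   # 2.2 (sampler sketched earlier)
    opt_D.zero_grad()                                # 2.3-2.4: discriminator step
    L_D = discriminator_loss(Z_D, pos_pairs=edges, neg_pairs=sample_D)
    L_D.backward()
    opt_D.step()
    opt_G.zero_grad()                                # 2.5-2.6: generator step
    L_G = generator_loss(Z_G, Z_D, sample_G)
    L_G.backward()
    opt_G.step()
    # 3.1 also allows stopping early once L_G and L_D stop decreasing significantly.
```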
3.2 The embedding representation matrices Z_D and Z_G, of size 3327×64, are the 64-dimensional embedding representations of the nodes of the Citeseer dataset.

4. Input the features of newly added documents into the model to perform document topic classification and related-document recommendation.

When newly published documents are added to the citation database, the newly constructed graph network is fed into the trained model to obtain embeddings for the document nodes. The embedding similarity between documents is then used to classify documents by topic and to provide researchers with high-quality recommendations of related literature, helping them discover the latest research results in their target fields.
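As an illustration of step 4, both topic assignment and recommendation can run on cosine similarity between node embeddings. A minimal sketch; the helper name and the top-k ranking are assumptions, not part of the patent text:

```python
import numpy as np

def recommend(z_new, Z, top_k=10):
    """Rank existing documents by cosine similarity to a new document's embedding."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)   # normalize stored embeddings
    zn = z_new / np.linalg.norm(z_new)                  # normalize the new embedding
    return np.argsort(-(Zn @ zn))[:top_k]               # indices of most similar documents
```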
This concludes the implementation steps for graph representation learning on the Citeseer dataset using the iterative aggregation graph representation learning method based on a generative adversarial network.

Embodiment 2

This embodiment relates to an unsupervised graph representation learning device based on a generative adversarial network, comprising a memory and one or more processors, the memory storing executable code which, when executed by the one or more processors, implements the method of Embodiment 1.

Embodiment 3

This embodiment relates to a computer-readable storage medium on which a program is stored; when the program is executed by a processor, it implements the method of Embodiment 1.

The content described in the embodiments of this specification merely enumerates forms of realizing the inventive concept; the scope of protection of the present invention should not be regarded as limited to the specific forms stated in the embodiments, but also extends to equivalent technical means that a person skilled in the art can conceive based on the inventive concept.
Claims (3)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311632050.7A (CN117875374B) | 2023-12-01 | 2023-12-01 | A graph representation learning method and device based on generative adversarial network |
Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311632050.7A (CN117875374B) | 2023-12-01 | 2023-12-01 | A graph representation learning method and device based on generative adversarial network |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN117875374A | 2024-04-12 |
| CN117875374B | 2025-04-04 |
Family
ID=90589052
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202311632050.7A (CN117875374B, Active) | A graph representation learning method and device based on generative adversarial network | 2023-12-01 | 2023-12-01 |
Country Status (1)

| Country | Link |
|---|---|
| CN | CN117875374B (en) |
Cited By (1)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| CN118446277A * | 2024-05-20 | 2024-08-06 | 浙江工业大学 | Self-supervised message passing graph representation learning method based on contrastive generation |
Patent Citations (4)

| Publication Number | Priority Date | Publication Date | Assignee | Title |
|---|---|---|---|---|
| CN108460391A * | 2018-03-09 | 2018-08-28 | 西安电子科技大学 | Unsupervised feature extraction method for hyperspectral images based on generative adversarial network |
| CN111210002A * | 2019-12-30 | 2020-05-29 | 北京航空航天大学 | A method and system for multi-layer academic network community discovery based on generative adversarial network model |
| US20220101103A1 * | 2020-09-25 | 2022-03-31 | Royal Bank Of Canada | System and method for structure learning for graph neural networks |
| CN116263785A * | 2022-11-16 | 2023-06-16 | 中移(苏州)软件技术有限公司 | Training method, classification method and device for cross-domain text classification model |
Also Published As

| Publication Number | Publication Date |
|---|---|
| CN117875374B | 2025-04-04 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |