CN112000988B - Factor decomposition machine regression model construction method, device and readable storage medium - Google Patents
- Publication number
- CN112000988B (application number CN202010893497.XA)
- Authority
- CN
- China
- Prior art keywords
- secret sharing
- shared
- party
- parameter
- model
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6245—Protecting personal data, e.g. for financial or medical purposes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0251—Targeted advertisements
- G06Q30/0255—Targeted advertisements based on user history
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0251—Targeted advertisements
- G06Q30/0269—Targeted advertisements based on user profile or attribute
- G06Q30/0271—Personalized advertisement
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Strategic Management (AREA)
- Finance (AREA)
- Development Economics (AREA)
- Accounting & Taxation (AREA)
- Physics & Mathematics (AREA)
- Software Systems (AREA)
- Marketing (AREA)
- Health & Medical Sciences (AREA)
- General Business, Economics & Management (AREA)
- General Engineering & Computer Science (AREA)
- Economics (AREA)
- Medical Informatics (AREA)
- Entrepreneurship & Innovation (AREA)
- General Health & Medical Sciences (AREA)
- Game Theory and Decision Science (AREA)
- Bioethics (AREA)
- Databases & Information Systems (AREA)
- Computer Hardware Design (AREA)
- Computer Security & Cryptography (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Storage Device Security (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
Description
Technical Field
The present application relates to the field of artificial intelligence in financial technology (Fintech), and in particular to a factorization machine regression model construction method, a device, and a readable storage medium.
Background Art
With the continuous development of financial technology, especially Internet-based fintech, more and more technologies (such as distributed computing, blockchain, and artificial intelligence) are being applied in the financial field. At the same time, the financial industry places higher demands on these technologies, for example, higher requirements on the distribution of its corresponding to-do items.
With the continuous development of computer software and artificial intelligence, federated learning is being applied in an ever wider range of fields. At present, vertical federated learning modeling usually constructs a regression model with either an unencrypted two-party federated learning method or a homomorphically encrypted two-party vertical federated learning method. The unencrypted two-party method carries a risk of data leakage and cannot protect the data privacy of the participants in vertical federated learning modeling. The homomorphically encrypted method requires a third party to generate a key pair and to provide encryption and decryption services, so that third party must be trustworthy; if it is untrustworthy or of low credibility, the risk of data leakage remains, and the data privacy of the participants is still unprotected.
Summary of the Invention
The main purpose of the present application is to provide a factorization machine regression model construction method, a device, and a readable storage medium, aiming to solve the technical problem in the prior art that the data privacy of each participant cannot be protected when a regression model is constructed based on vertical federated learning modeling.
To achieve the above purpose, the present application provides a factorization machine regression model construction method, applied to a factorization machine regression model construction device, the method including:
performing secret sharing with a second device to obtain secret-shared model parameters and secret-shared training data;
performing vertical federated learning modeling with the second device based on the secret-shared training data and the secret-shared model parameters, so as to compute a secret-shared regression error;
determining first target regression model parameters based on the secret-shared regression error, and assisting the second device in determining second target regression model parameters, so as to construct a vertical federated factorization machine regression model.
The present application also provides a personalized recommendation method, applied to a personalized recommendation device, the method including:
performing secret sharing with a second device to obtain secret-shared to-be-recommended user data and secret-shared model parameters;
inputting the secret-shared to-be-recommended user data into a preset scoring model, so as to score, based on the secret-shared model parameters, the to-be-recommended items corresponding to the secret-shared to-be-recommended user data, and obtain a first secret-shared scoring result;
performing a federated interaction with the second device based on the first secret-shared scoring result, so as to compute a target score jointly with a second secret-shared scoring result determined by the second device;
generating, based on the target score, a target recommendation list corresponding to the to-be-recommended items.
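The scoring and aggregation steps above can be sketched in a few lines. This is a minimal illustration only: the plain additive recombination of the two scoring results, the item names, and the top-N cutoff are assumptions made for demonstration and are not fixed by the text above.

```python
# Sketch: each party holds one secret share of every candidate item's score.
# The target score is recovered by adding the first and second secret-shared
# scoring results, and the recommendation list ranks items by that score.

def target_scores(first_shares: dict, second_shares: dict) -> dict:
    """Recombine the two parties' score shares into per-item target scores."""
    return {item: first_shares[item] + second_shares[item] for item in first_shares}

def recommend(first_shares: dict, second_shares: dict, top_n: int = 2) -> list:
    """Generate the target recommendation list, highest target score first."""
    scores = target_scores(first_shares, second_shares)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical share values for three candidate items:
first = {"item1": 0.9, "item2": -0.2, "item3": 1.1}
second = {"item1": -0.5, "item2": 0.9, "item3": -0.6}
print(recommend(first, second))  # target scores ~0.4, ~0.7, ~0.5
```

Neither dictionary alone reveals a meaningful score; only the recombined sums order the items.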
The present application also provides a factorization machine regression model construction apparatus; the apparatus is a virtual apparatus and is applied to a factorization machine regression model construction device. The apparatus includes:
a secret sharing module, configured to perform secret sharing with a second device to obtain secret-shared model parameters and secret-shared training data;
a vertical federation module, configured to perform vertical federated learning modeling with the second device based on the secret-shared training data and the secret-shared model parameters, and to compute a secret-shared regression error;
a determination module, configured to determine first target regression model parameters based on the secret-shared regression error, and to assist the second device in determining second target regression model parameters, so as to construct a vertical federated factorization machine regression model.
The present application also provides a personalized recommendation apparatus; the apparatus is a virtual apparatus and is applied to a personalized recommendation device. The apparatus includes:
a secret sharing module, configured to perform secret sharing with a second device to obtain secret-shared to-be-recommended user data and secret-shared model parameters;
a scoring module, configured to input the secret-shared to-be-recommended user data into a preset scoring model, so as to score, based on the secret-shared model parameters, the to-be-recommended items corresponding to the secret-shared to-be-recommended user data, and obtain a first secret-shared scoring result;
a calculation module, configured to perform a federated interaction with the second device based on the first secret-shared scoring result, so as to compute a target score jointly with a second secret-shared scoring result determined by the second device;
a generation module, configured to generate, based on the target score, a target recommendation list corresponding to the to-be-recommended items.
The present application also provides a factorization machine regression model construction device; the device is a physical device and includes a memory, a processor, and a program of the factorization machine regression model construction method that is stored in the memory and executable on the processor. When the program is executed by the processor, the steps of the factorization machine regression model construction method described above are implemented.
The present application also provides a personalized recommendation device; the device is a physical device and includes a memory, a processor, and a program of the personalized recommendation method that is stored in the memory and executable on the processor. When the program is executed by the processor, the steps of the personalized recommendation method described above are implemented.
The present application also provides a readable storage medium storing a program that implements the factorization machine regression model construction method. When the program is executed by a processor, the steps of the factorization machine regression model construction method described above are implemented.
The present application also provides a readable storage medium storing a program that implements the personalized recommendation method. When the program is executed by a processor, the steps of the personalized recommendation method described above are implemented.
The present application provides a factorization machine regression model construction method, device, and readable storage medium. Compared with the prior-art technique of constructing a regression model with an unencrypted two-party federated learning method or a homomorphically encrypted two-party vertical federated learning modeling method, the present application performs secret sharing with a second device to obtain secret-shared model parameters and secret-shared training data; then, based on the secret-shared training data and the secret-shared model parameters, it performs vertical federated learning modeling with the second device and computes a secret-shared regression error; and then, based on the secret-shared regression error, it updates the secret-shared model parameters to obtain secret-shared regression model update parameters. When interacting with the second device, all data sent or received are secret-shared data, and no public-private key pair generated by a third party is needed to encrypt the data; every data transmission takes place only between the two parties participating in the vertical federated learning modeling, which protects data privacy. Then, based on the secret-shared regression model update parameters, the first target regression model parameters can be determined through a decryption interaction with the second device, and the second device is assisted in determining the second target regression model parameters, thereby completing the construction of the vertical federated factorization machine regression model. This overcomes the technical defect of the prior art, in which a regression model built with an unencrypted two-party federated learning method or a homomorphically encrypted two-party vertical federated learning modeling method cannot protect the data privacy of each participant in vertical federated learning modeling. The technical problem that the data privacy of each participant cannot be protected when a regression model is constructed based on vertical federated learning modeling is therefore solved.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the present application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required by the embodiments or the description of the prior art are briefly introduced below. Obviously, a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flow chart of the first embodiment of the factorization machine regression model construction method of the present application;
FIG. 2 is a flow chart of the second embodiment of the factorization machine regression model construction method of the present application;
FIG. 3 is a flow chart of the third embodiment of the personalized recommendation method of the present application;
FIG. 4 is a schematic structural diagram of the hardware operating environment involved in the factorization machine regression model construction method according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of the hardware operating environment involved in the personalized recommendation method according to an embodiment of the present application.
The purpose, functional features, and advantages of the present application will be further described with reference to the embodiments and the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are only intended to explain the present application and are not intended to limit it.
An embodiment of the present application provides a factorization machine regression model construction method. In the first embodiment of the method, referring to FIG. 1, the method is applied to a first device and includes:
Step S10: performing secret sharing with a second device to obtain secret-shared model parameters and secret-shared training data.
In this embodiment, it should be noted that the first device and the second device are both participants in vertical federated learning. The first device owns first-party training label data that carries sample labels; this data can be represented by a first-party training data matrix and the sample labels, for example as (X_A, Y), where X_A is the first-party training data matrix and Y is the sample labels. In addition, the second device owns second-party training data without sample labels, which can be represented by a second-party training data matrix, for example X_B.
In addition, in this embodiment, the factorization machine regression model is a machine learning model constructed based on vertical federated learning, and its model parameters are jointly held by the first device and the second device. The model includes first-type model parameters and second-type model parameters: the first-type model parameters consist of first-party first-type model parameters and second-party first-type model parameters, and the second-type model parameters consist of first-party second-type model parameters and second-party second-type model parameters. For example, if the first-type model parameter is w and the second-type model parameter is V, then the first-party first-type model parameter is w_A, the second-party first-type model parameter is w_B, the first-party second-type model parameter is V_A, and the second-party second-type model parameter is V_B.
In addition, it should be noted that secretly sharing a piece of data means splitting it into two sub-data items that are held by the two sharing parties respectively. For example, if the two sharing parties are A and B and data X is secretly shared, then A holds the first share [[X]]_A of X, B holds the second share [[X]]_B of X, and X = [[X]]_A + [[X]]_B.
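The additive splitting just described can be sketched in a few lines. This is an illustrative sketch only: the text above does not specify the arithmetic domain, so the prime modulus P and Python's random source are assumptions made purely for demonstration.

```python
import random

# Additive secret sharing: X = ([[X]]_A + [[X]]_B) mod P.
P = 2**61 - 1  # illustrative prime modulus (an assumption, not fixed by the text)

def share(x: int) -> tuple[int, int]:
    """Split x into two additive shares; either share alone is uniformly random."""
    share_a = random.randrange(P)
    share_b = (x - share_a) % P
    return share_a, share_b

def reconstruct(share_a: int, share_b: int) -> int:
    """Recombine the two shares to recover x."""
    return (share_a + share_b) % P

a, b = share(42)
assert reconstruct(a, b) == 42
```

Because one share is drawn uniformly at random, the party holding it learns nothing about X; only the sum of both shares recovers the value.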
In addition, it should be noted that the model expression of the factorization machine regression model is as follows:
z(x) = <w, x> + Σ_{i<j} <V_i, V_j> x_i x_j
Here, x is the data matrix corresponding to the model input data, where the model input data includes the first-party training label data (X_A, Y) and the second-party training data X_B, Y is the sample labels, X_A has d_A feature dimensions, and X_B has d_B feature dimensions. The first-type model parameter w is a d-dimensional vector, and the second-type model parameter V is a d*d matrix. Moreover, w = [w_A, w_B], that is, w is composed of the first-party first-type model parameter w_A and the second-party first-type model parameter w_B, where w_A is a d_A-dimensional vector and w_B is a d_B-dimensional vector. Likewise, V = [V_A, V_B], that is, V is composed of the first-party second-type model parameter V_A and the second-party second-type model parameter V_B, where V_A is a d_A*d_X matrix and V_B is a d_B*d_X matrix. <w, x> is the inner product of w and x, V_i is the column vector of the i-th column of V, V_j is the column vector of the j-th column of V, x_i is the column vector of the i-th column of x, and x_j is the column vector of the j-th column of x.
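For a single input row, the model expression above can be evaluated directly. The sketch below follows the formula z(x) = <w, x> + Σ_{i<j} <V_i, V_j> x_i x_j, representing V as a list of per-feature latent vectors (so V[i] plays the role of V_i); the concrete dimensions and numeric values are assumptions chosen only to make the example runnable.

```python
# Minimal factorization machine scoring sketch for one sample x.

def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(ui * vi for ui, vi in zip(u, v))

def fm_score(w, V, x):
    """z(x) = <w, x> + sum over i < j of <V_i, V_j> * x_i * x_j."""
    linear = dot(w, x)
    pairwise = sum(
        dot(V[i], V[j]) * x[i] * x[j]
        for i in range(len(x))
        for j in range(i + 1, len(x))
    )
    return linear + pairwise

# Hypothetical parameters: d = 3 features, latent dimension 2.
w = [0.5, -1.0, 2.0]
V = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
x = [1.0, 2.0, 1.0]
print(fm_score(w, V, x))  # 0.5 (linear) + 3.0 (pairwise) = 3.5
```

The pairwise term is what distinguishes the factorization machine from plain linear regression: every feature pair (i, j) contributes through the inner product of its two latent vectors.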
Performing secret sharing with the second device to obtain the secret-shared model parameters and the secret-shared training data proceeds as follows. Specifically, the first device obtains the initialization model corresponding to the factorization machine regression model together with the first-party training label data, as well as the first-party first-type model parameters and the first-party second-type model parameters corresponding to the initialization model. Likewise, before performing secret sharing, the second device obtains the second-party training data and the second-party first-type and second-type model parameters corresponding to the initialization model. The first device and the second device then perform secret sharing, in which the first device contributes the first-party training label data, the first-party first-type model parameters, and the first-party second-type model parameters, while the second device contributes the second-party training data, the second-party first-type model parameters, and the second-party second-type model parameters. As a result, the first device obtains the secret-shared model parameters and the secret-shared training data, and the second device obtains its own second-party secret-shared model parameters and second-party secret-shared training data. The secret-shared model parameters include the first share of the first-party first-type model parameters, the first share of the first-party second-type model parameters, the second share of the second-party first-type model parameters, and the second share of the second-party second-type model parameters. The second-party secret-shared model parameters include the second share of the first-party first-type model parameters, the second share of the first-party second-type model parameters, the first share of the second-party first-type model parameters, and the first share of the second-party second-type model parameters. The secret-shared training data includes the first share of the first-party training label data and the second share of the second-party training data, and the second-party secret-shared training data includes the second share of the first-party training label data and the first share of the second-party training data.
The secret-shared model parameters include a first shared parameter and a second shared parameter, and the secret-shared training data includes first shared training data and second shared training data.
The step of performing secret sharing with the second device to obtain the secret-shared model parameters and the secret-shared training data includes:
Step S11: obtaining first-party model parameters and the first-party training label data, and using the first share of the first-party model parameters as the first shared parameter.
In this embodiment, it should be noted that the first-party model parameters include the first-party first-type model parameters and the first-party second-type model parameters, and the second-party secret-shared model parameters include a third shared parameter and a fourth shared parameter.
Specifically, the first-party first-type model parameters, the first-party second-type model parameters, and the first-party training label data are each split into two shares, and the first share of the first-party first-type model parameters together with the first share of the first-party second-type model parameters serve as the first shared parameter.
Step S12: sending the second share of the first-party model parameters to the second device, so that the second device can determine the third shared parameter.
In this embodiment, the second share of the first-party first-type model parameters and the second share of the first-party second-type model parameters are both sent to the second device, which then uses these two second shares together as the third shared parameter.
Step S13: receiving the second shared parameter sent by the second device, where the second shared parameter is the second share of the second-party model parameters obtained by the second device, and the first share of the second-party model parameters is the fourth shared parameter of the second device.
In this embodiment, the second device splits the second-party first-type model parameters and the second-party second-type model parameters into two shares each, uses the first share of each together as the fourth shared parameter, and sends the second share of each to the first device. The first device then receives the second share of the second-party first-type model parameters and the second share of the second-party second-type model parameters, and uses them together as the second shared parameter.
步骤S14,将所述第一方训练标签数据的第一份额作为所述第一共享训练数据,并将所述第一方训练标签数据的第二份额发送至所述第二设备,以供所述第二设备确定第三共享训练数据;Step S14, using the first share of the first-party training label data as the first shared training data, and sending the second share of the first-party training label data to the second device, so that the second device can determine the third shared training data;
在本实施例中,需要说明的是,所述第二方秘密共享训练数据包括第三共享训练数据和第四共享训练数据。In this embodiment, it should be noted that the second-party secret shared training data includes third shared training data and fourth shared training data.
将所述第一方训练标签数据的第一份额作为所述第一共享训练数据,并将所述第一方训练标签数据的第二份额发送至所述第二设备,以供所述第二设备确定第三共享训练数据,具体地,将所述第一方训练标签数据拆分为两份,进而将所述第一方训练标签数据的第一份额作为第一共享训练数据,并将所述第一方训练标签数据的第二份额发送至所述第二设备,进而所述第二设备将所述第一方训练标签数据的第二份额作为第三共享训练数据。The first share of the first-party training label data is used as the first shared training data, and the second share of the first-party training label data is sent to the second device, so that the second device determines the third shared training data. Specifically, the first-party training label data is split into two parts, and then the first share of the first-party training label data is used as the first shared training data, and the second share of the first-party training label data is sent to the second device, and then the second device uses the second share of the first-party training label data as the third shared training data.
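The two-way splitting of the first-party training label data described above is ordinary additive secret sharing. A minimal Python sketch, assuming a prime-field modulus and function names that are illustrative only (the patent fixes no concrete field):

```python
import random

P = 2**61 - 1  # illustrative prime modulus; an assumption, not from the patent

def split_into_shares(value):
    """Split a value into two additive shares: share1 + share2 = value (mod P)."""
    share1 = random.randrange(P)
    share2 = (value - share1) % P
    return share1, share2

# The first device keeps the first share as its first shared training data and
# sends the second share to the second device, which treats it as the third
# shared training data.
label = 1
first_shared, third_shared = split_into_shares(label)
assert (first_shared + third_shared) % P == label
```

Neither share alone reveals the label, since each is uniformly distributed over the field; only their sum reconstructs it.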
步骤S15,接收第二设备发送的第二共享训练数据,其中,所述第二共享训练数据为第二设备获取的第二方训练数据的第二份额,且所述第二方训练数据的第一份额为所述第二设备的第四共享训练数据。Step S15: receiving second shared training data sent by the second device, wherein the second shared training data is a second share of second-party training data acquired by the second device, and the first share of the second-party training data is fourth shared training data of the second device.
在本实施例中，接收第二设备发送的第二共享训练数据，其中，所述第二共享训练数据为第二设备获取的第二方训练数据的第二份额，且所述第二方训练数据的第一份额为所述第二设备的第四共享训练数据，具体地，所述第二设备将所述第二方训练数据拆分为两份，并将所述第二方训练数据的第一份额作为所述第四共享训练数据，将所述第二方训练数据的第二份额发送至所述第一设备，所述第一设备将所述第二方训练数据的第二份额作为所述第二共享训练数据。In this embodiment, second shared training data sent by a second device is received, wherein the second shared training data is the second share of the second-party training data acquired by the second device, and the first share of the second-party training data is the fourth shared training data of the second device. Specifically, the second device splits the second-party training data into two parts, uses the first share of the second-party training data as the fourth shared training data, and sends the second share of the second-party training data to the first device, which uses the second share of the second-party training data as the second shared training data.
步骤S20，基于所述秘密共享训练数据和所述秘密共享模型参数，与所述第二设备进行纵向联邦学习建模，计算秘密共享回归误差；Step S20: performing vertical federated learning modeling with the second device based on the secret sharing training data and the secret sharing model parameters, to calculate a secret sharing regression error;
在本实施例中，需要说明的是，所述秘密共享训练数据包括第一共享训练数据和第二共享训练数据，其中，所述第一共享训练数据为第一方训练标签数据的第一份额，所述第二共享训练数据为所述第二方训练数据的第二份额，所述秘密共享模型参数包括第一共享参数和第二共享参数，其中，所述第一共享参数包括第一方第一类型模型参数的第一份额和第一方第二类型模型参数的第一份额，所述第二共享参数包括第二方第一类型模型参数的第二份额和所述第二方第二类型模型参数的第二份额。In this embodiment, it should be noted that the secret shared training data includes first shared training data and second shared training data, wherein the first shared training data is the first share of the first-party training label data and the second shared training data is the second share of the second-party training data; the secret shared model parameters include first shared parameters and second shared parameters, wherein the first shared parameters include the first share of the first-party first-type model parameters and the first share of the first-party second-type model parameters, and the second shared parameters include the second share of the second-party first-type model parameters and the second share of the second-party second-type model parameters.
另外地，所述第二设备在进行纵向联邦学习建模时提供第二方秘密共享训练数据和第二方秘密共享模型参数，其中，所述第二方秘密共享训练数据包括第三共享训练数据和第四共享训练数据，其中，所述第三共享训练数据为第一方训练标签数据的第二份额，所述第四共享训练数据为所述第二方训练数据的第一份额，所述第二方秘密共享模型参数包括第三共享参数和第四共享参数，其中，所述第三共享参数包括第一方第一类型模型参数的第二份额和第一方第二类型模型参数的第二份额，所述第四共享参数包括第二方第一类型模型参数的第一份额和所述第二方第二类型模型参数的第一份额。Additionally, the second device provides second-party secret shared training data and second-party secret shared model parameters when performing vertical federated learning modeling, wherein the second-party secret shared training data includes third shared training data and fourth shared training data, the third shared training data being the second share of the first-party training label data and the fourth shared training data being the first share of the second-party training data; the second-party secret shared model parameters include third shared parameters and fourth shared parameters, wherein the third shared parameters include the second share of the first-party first-type model parameters and the second share of the first-party second-type model parameters, and the fourth shared parameters include the first share of the second-party first-type model parameters and the first share of the second-party second-type model parameters.
基于所述秘密共享训练数据和所述秘密共享模型参数，与所述第二设备进行纵向联邦学习建模，计算秘密共享回归误差，具体地，基于所述第一共享参数、所述第二共享参数、第一共享训练数据和第二共享训练数据，与所述第二设备进行联邦交互，其中，所述第二设备在进行联邦交互时提供第三共享参数、第四共享参数、第三共享训练数据和第四共享训练数据，计算秘密共享中间参数，进而基于所述秘密共享中间参数，通过预设秘密共享回归误差计算公式，计算秘密共享回归误差。Based on the secret sharing training data and the secret sharing model parameters, vertical federated learning modeling is performed with the second device to calculate the secret sharing regression error. Specifically, federated interaction is performed with the second device based on the first shared parameters, the second shared parameters, the first shared training data and the second shared training data, wherein the second device provides the third shared parameters, the fourth shared parameters, the third shared training data and the fourth shared training data during the federated interaction; secret sharing intermediate parameters are calculated, and the secret sharing regression error is then calculated from the secret sharing intermediate parameters by using a preset secret sharing regression error calculation formula.
其中,所述秘密共享模型参数包括第一类型共享参数和第二类型共享参数,所述秘密共享训练数据包括秘密共享标签数据,The secret sharing model parameters include first type sharing parameters and second type sharing parameters, the secret sharing training data include secret sharing label data,
所述基于所述秘密共享训练数据和所述秘密共享模型参数，与所述第二设备进行纵向联邦学习建模，计算秘密共享回归误差的步骤包括：The step of performing vertical federated learning modeling with the second device based on the secret sharing training data and the secret sharing model parameters to calculate the secret sharing regression error includes:
步骤S21,基于预设秘密共享机制,通过与所述第二设备进行联邦交互,计算所述第二类型共享参数和所述秘密共享训练数据共同对应的秘密共享交叉特征项内积;Step S21, based on a preset secret sharing mechanism, by performing federated interaction with the second device, calculating the inner product of the secret sharing cross-feature term corresponding to the second type of sharing parameter and the secret sharing training data;
在本实施例中,需要说明的是,所述预设秘密共享机制包括秘密共享加法和秘密共享乘法,所述第一类型共享参数包括第一方第一类型模型参数的第一份额和第二方第一类型模型参数的第二份额,所述第二类型共享参数包括第一方第二类型模型参数的第一份额和所述第二方第二类型模型参数的第二份额,且所述第二设备拥有第二方第一类型共享参数和第二方第二类型共享参数,其中,所述第二方第一类型共享参数包括第一方第一类型模型参数的第二份额和第二方第一类型模型参数的第一份额,所述第二方第二类型共享参数包括第一方第二类型模型参数的第二份额和所述第二方第二类型模型参数的第一份额,所述秘密共享标签数据为秘密共享的样本标签。In this embodiment, it should be noted that the preset secret sharing mechanism includes secret sharing addition and secret sharing multiplication, the first type of shared parameters include the first share of the first party's first type model parameters and the second share of the second party's first type model parameters, the second type of shared parameters include the first share of the first party's second type model parameters and the second share of the second party's second type model parameters, and the second device has the second party's first type shared parameters and the second party's second type shared parameters, wherein the second party's first type shared parameters include the second share of the first party's first type model parameters and the first share of the second party's first type model parameters, the second party's second type shared parameters include the second share of the first party's second type model parameters and the first share of the second party's second type model parameters, and the secret sharing label data is a sample label for secret sharing.
基于预设秘密共享机制，通过与所述第二设备进行联邦交互，计算所述第二类型共享参数和所述秘密共享训练数据共同对应的秘密共享交叉特征项内积，具体地，基于秘密共享乘法，通过与所述第二设备进行联邦交互，计算所述第二类型共享参数中每一参数元素与所述秘密共享训练数据每一训练数据元素之间的交叉内积，其中，一所述参数元素与一所述训练数据元素之间存在一所述交叉内积，进而对各所述交叉内积进行累加，获得所述秘密共享交叉特征项内积，另外地，在进行联邦交互时，基于秘密共享乘法，所述第二设备将计算所述第二方第二类型共享参数中每一第二方参数元素和所述第二方秘密共享训练数据中每一第二训练数据元素之间的第二方交叉内积，获得第二方秘密共享交叉特征项内积。Based on the preset secret sharing mechanism, the secret sharing cross-feature term inner product jointly corresponding to the second type of shared parameters and the secret sharing training data is calculated through federated interaction with the second device. Specifically, based on secret sharing multiplication, the cross inner product between each parameter element in the second type of shared parameters and each training data element of the secret sharing training data is calculated through federated interaction with the second device, wherein one cross inner product exists between one parameter element and one training data element, and the cross inner products are then accumulated to obtain the secret sharing cross-feature term inner product. Additionally, during the federated interaction, based on secret sharing multiplication, the second device calculates the second-party cross inner product between each second-party parameter element in the second-party second-type shared parameters and each second-party training data element in the second-party secret sharing training data, to obtain the second-party secret sharing cross-feature term inner product.
其中,所述第二类型共享参数包括第一共享第二类型模型参数和第二共享第二类型模型参数,所述秘密共享训练数据包括第一共享训练数据和第二共享训练数据,所述秘密共享交叉特征项内积包括第一交叉特征项内积和第二交叉特征项内积,所述预设秘密共享机制包括秘密共享乘法,The second type of shared parameters includes a first shared second type model parameter and a second shared second type model parameter, the secret shared training data includes a first shared training data and a second shared training data, the secret shared cross-feature item inner product includes a first cross-feature item inner product and a second cross-feature item inner product, and the preset secret sharing mechanism includes a secret shared multiplication.
所述基于预设秘密共享机制,通过与所述第二设备进行联邦交互,计算所述第二类型共享参数和所述秘密共享训练数据共同对应的秘密共享交叉特征项内积的步骤包括:The step of calculating the inner product of the secret sharing cross-feature term corresponding to the second type of sharing parameter and the secret sharing training data based on the preset secret sharing mechanism by performing federated interaction with the second device comprises:
步骤S211,基于所述秘密共享乘法,通过与所述第二设备进行联邦交互,计算所述第一共享第二类型模型参数中各元素和所述第一共享训练数据中各元素之间的交叉内积,获得各第一元素交叉内积;Step S211, based on the secret sharing multiplication, by performing federation interaction with the second device, calculating the cross inner product between each element in the first shared second type model parameter and each element in the first shared training data, to obtain the cross inner product of each first element;
在本实施例中,需要说明的是,所述第一共享第二类型模型参数可为矩阵形式的参数,其中,所述第一共享第二类型模型参数为第二方第二类型模型参数的第二份额,所述第一共享第二类型模型参数的每一列为一第一参数元素,所述第一共享训练数据可为矩阵形式的训练数据,其中,所述第一共享训练数据为第二方训练数据的第二份额,所述第一共享训练数据的每一列为一第一训练数据元素。In this embodiment, it should be noted that the first shared second-type model parameter may be a parameter in matrix form, wherein the first shared second-type model parameter is the second share of the second-party second-type model parameter, and each column of the first shared second-type model parameter is a first parameter element. The first shared training data may be training data in matrix form, wherein the first shared training data is the second share of the second-party training data, and each column of the first shared training data is a first training data element.
基于所述秘密共享乘法,通过与所述第二设备进行联邦交互,计算所述第一共享第二类型模型参数中各元素和所述第一共享训练数据中各元素之间的交叉内积,获得各第一元素交叉内积,具体地,获取所述秘密共享乘法对应的第一秘密共享乘法三元组,进而基于所述第一秘密共享乘法三元组,通过秘密共享乘法,与所述第二设备进行联邦交互,计算每一所述第一参数元素和每一所述第一训练数据元素之间的内积,获得各所述第一元素交叉内积,其中,所述第二设备在与所述第一设备进行联邦交互时,计算每一所述第一元素交叉内积对应的第二方第一元素交叉内积。Based on the secret sharing multiplication, by performing federal interaction with the second device, the cross inner product between each element in the first shared second type model parameter and each element in the first shared training data is calculated to obtain each first element cross inner product. Specifically, a first secret sharing multiplication triplet corresponding to the secret sharing multiplication is obtained, and then based on the first secret sharing multiplication triplet, by performing federal interaction with the second device through secret sharing multiplication, the inner product between each first parameter element and each first training data element is calculated to obtain each first element cross inner product, wherein the second device calculates the second party first element cross inner product corresponding to each first element cross inner product when performing federal interaction with the first device.
其中,在一种可实施的方案中,假设第一设备拥有秘密共享乘法三元组([[a]]A,[[b]]A,[[c]]A),第二设备拥有秘密共享乘法三元组([[a]]B,[[b]]B,[[c]]B),其中,[[a]]A+[[a]]B=a,[[b]]A+[[b]]B=b,[[c]]A+[[c]]B=c,c=a*b,且所述第一参数元素为秘密共享的[[x]]A,所述第一训练数据元素为[[y]]A,所述第二设备中所述第一参数元素对应的参数元素为[[x]]B,所述第一训练数据元素对应的训练数据元素为[[y]]B,其中,[[x]]A+[[x]]B=x,[[y]]A+[[y]]B=y,则第一设备计算的所述第一元素交叉内积为秘密共享的[[x*y]]A,第二设备计算的所述第二方第一元素交叉内积为[[x*y]]B,且[[x*y]]A+[[x*y]]B=x*y,进而,具体地,计算流程如下:In one implementable scheme, it is assumed that the first device has a secret shared multiplication triplet ([[a]] A , [[b]] A , [[c]] A ), and the second device has a secret shared multiplication triplet ([[a]] B , [[b]] B , [[c]] B ), wherein [[a]] A +[[a]] B = a, [[b]] A +[[b]] B = b, [[c]] A +[[c]] B = c, c = a*b, and the first parameter element is the secret shared [[x]] A , the first training data element is [[y]] A , the parameter element corresponding to the first parameter element in the second device is [[x]] B , and the training data element corresponding to the first training data element is [[y]] B , wherein [[x]] A +[[x]] B = x, [[y]] A +[[y]] B =y, then the first element cross inner product calculated by the first device is the secret shared [[x*y]] A , the second party first element cross inner product calculated by the second device is [[x*y]] B , and [[x*y]] A +[[x*y]] B =x*y. Specifically, the calculation process is as follows:
首先，第一设备计算[[e]]A=[[x]]A-[[a]]A和[[f]]A=[[y]]A-[[b]]A，第二设备计算[[e]]B=[[x]]B-[[a]]B和[[f]]B=[[y]]B-[[b]]B，进而第一设备将[[e]]A和[[f]]A发送至第二设备，第二设备将[[e]]B和[[f]]B发送至第一设备，进而第一设备和第二设备均获得e=x-a和f=y-b，进而第一设备计算[[x*y]]A=f*[[a]]A+e*[[b]]A+[[c]]A，第二设备计算[[x*y]]B=e*f+f*[[a]]B+e*[[b]]B+[[c]]B，进而[[x*y]]A+[[x*y]]B=e*f+f*a+e*b+c，进而将e=x-a和f=y-b代入该计算表达式，即可获得[[x*y]]A+[[x*y]]B=x*y，也即，所述第一元素交叉内积和所述第二方第一元素交叉内积计算完毕。First, the first device calculates [[e]]A=[[x]]A-[[a]]A and [[f]]A=[[y]]A-[[b]]A, and the second device calculates [[e]]B=[[x]]B-[[a]]B and [[f]]B=[[y]]B-[[b]]B. The first device then sends [[e]]A and [[f]]A to the second device, and the second device sends [[e]]B and [[f]]B to the first device, so that both devices obtain e=x-a and f=y-b. The first device then calculates [[x*y]]A=f*[[a]]A+e*[[b]]A+[[c]]A, and the second device calculates [[x*y]]B=e*f+f*[[a]]B+e*[[b]]B+[[c]]B, so that [[x*y]]A+[[x*y]]B=e*f+f*a+e*b+c; substituting e=x-a and f=y-b into this expression yields [[x*y]]A+[[x*y]]B=x*y, that is, the first element cross inner product and the second-party first element cross inner product have been calculated.
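The triple-based multiplication flow above can be mirrored in a small Python simulation. The field modulus and the local generation of the triple are simulation conveniences (in the protocol each device already holds its triple shares); the share computations follow the formulas in the text:

```python
import random

P = 2**61 - 1  # illustrative prime modulus; the patent fixes no concrete field

def share(v):
    """Split v into two additive shares with [[v]]A + [[v]]B = v (mod P)."""
    vA = random.randrange(P)
    return vA, (v - vA) % P

def beaver_mul(xA, xB, yA, yB):
    """Compute shares of x*y from shares of x and y using a Beaver triple."""
    a, b = random.randrange(P), random.randrange(P)
    c = (a * b) % P
    aA, aB = share(a)
    bA, bB = share(b)
    cA, cB = share(c)
    # Each device masks its shares; both exchange the masked values and
    # reconstruct e = x - a and f = y - b in the clear.
    e = ((xA - aA) + (xB - aB)) % P
    f = ((yA - bA) + (yB - bB)) % P
    zA = (f * aA + e * bA + cA) % P            # first device's share of x*y
    zB = (e * f + f * aB + e * bB + cB) % P    # second device's share of x*y
    return zA, zB

xA, xB = share(6)
yA, yB = share(7)
zA, zB = beaver_mul(xA, xB, yA, yB)
assert (zA + zB) % P == 42  # the shares reconstruct x*y = 6*7
```

Substituting e=x-a and f=y-b into zA+zB = e*f + f*a + e*b + c confirms the reconstruction, exactly as derived in the text.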
步骤S212,基于所述秘密共享乘法,通过与所述第二设备进行联邦交互,计算所述第二共享第二类型模型参数中各元素和所述第二共享训练数据中各元素之间的交叉内积,获得各第二元素交叉内积;Step S212: Based on the secret sharing multiplication, by performing federated interaction with the second device, a cross inner product between each element in the second shared second type model parameter and each element in the second shared training data is calculated to obtain a cross inner product of each second element;
在本实施例中,需要说明的是,所述第二共享第二类型模型参数可为矩阵形式的参数,其中,所述第二共享第二类型模型参数为第一方第二类型模型参数的第一份额,所述第二共享第二类型模型参数的每一列为一第二参数元素,所述第二共享训练数据可为矩阵形式的训练数据,其中,所述第二共享训练数据为所述第一方训练标签数据的第一份额,所述第二共享训练数据的每一列为一第二训练数据元素。In this embodiment, it should be noted that the second shared second type model parameter may be a parameter in matrix form, wherein the second shared second type model parameter is the first share of the first party's second type model parameter, and each column of the second shared second type model parameter is a second parameter element, and the second shared training data may be training data in matrix form, wherein the second shared training data is the first share of the first party's training label data, and each column of the second shared training data is a second training data element.
基于所述秘密共享乘法,通过与所述第二设备进行联邦交互,计算所述第二共享第二类型模型参数中各元素和所述第二共享训练数据中各元素之间的交叉内积,获得各第二元素交叉内积,具体地,获取所述秘密共享乘法对应的第二秘密共享乘法三元组,进而基于所述第二秘密共享乘法三元组,通过秘密共享乘法,与所述第二设备进行联邦交互,计算每一所述第二参数元素和每一所述第二训练数据元素之间的内积,获得各所述第二元素交叉内积。Based on the secret sharing multiplication, by performing federal interaction with the second device, the cross inner product between each element in the second shared second type model parameter and each element in the second shared training data is calculated to obtain each cross inner product of the second elements. Specifically, a second secret sharing multiplication triplet corresponding to the secret sharing multiplication is obtained, and then based on the second secret sharing multiplication triplet, by performing federal interaction with the second device through secret sharing multiplication, the inner product between each second parameter element and each second training data element is calculated to obtain each cross inner product of the second elements.
步骤S213,分别对各所述第一元素交叉内积和各所述第二元素交叉内积进行累加,获得各所述第一元素交叉内积对应的所述第一交叉特征项内积和各所述第二元素交叉内积对应的所述第二交叉特征项内积。Step S213, respectively accumulating the first element cross inner products and the second element cross inner products to obtain the first cross characteristic term inner products corresponding to the first element cross inner products and the second cross characteristic term inner products corresponding to the second element cross inner products.
在本实施例中,分别对各所述第一元素交叉内积和各所述第二元素交叉内积进行累加,获得各所述第一元素交叉内积对应的所述第一交叉特征项内积和各所述第二元素交叉内积对应的所述第二交叉特征项内积,具体地,对各所述第一元素交叉内积进行累加,获得所述第一交叉特征项内积,并对各所述第二元素交叉内积进行累加,获得所述第二交叉特征项内积,其中,所述第一交叉特征项内积的计算表达式如下:In this embodiment, each of the first element cross inner products and each of the second element cross inner products are accumulated respectively to obtain the first cross feature item inner product corresponding to each of the first element cross inner products and the second cross feature item inner product corresponding to each of the second element cross inner products. Specifically, each of the first element cross inner products is accumulated to obtain the first cross feature item inner product, and each of the second element cross inner products is accumulated to obtain the second cross feature item inner product, wherein the calculation expression of the first cross feature item inner product is as follows:
[[P1]]A=Σi=1..dxΣj=1..dB⟨[[VB(i)]]A,[[XB(j)]]A⟩，其中，[[VB(i)]]A为第一设备拥有的秘密共享的第一参数元素，也即为所述第二方第二类型模型参数的第二份额中的列向量，[[XB(j)]]A为第一设备拥有的秘密共享的第一训练数据元素，也即为第二方训练数据的第二份额中的列向量，另外地，所述第二交叉特征项内积的计算公式如下：[[P1]]A=Σi=1..dxΣj=1..dB⟨[[VB(i)]]A,[[XB(j)]]A⟩, wherein [[VB(i)]]A is a first parameter element of the secret share owned by the first device, that is, a column vector in the second share of the second-party second-type model parameters, and [[XB(j)]]A is a first training data element of the secret share owned by the first device, that is, a column vector in the second share of the second-party training data; additionally, the calculation formula of the second cross-feature term inner product is as follows:
[[P2]]A=Σi=1..dxΣj=1..dA⟨[[VA(i)]]A,[[XA(j)]]A⟩，其中，[[VA(i)]]A为第一设备拥有的秘密共享的第二参数元素，也即为第一方第二类型模型参数的第一份额中的列向量，[[XA(j)]]A为第一设备拥有的秘密共享的第二训练数据元素，也即为第一方训练标签数据的第一份额中的列向量。[[P2]]A=Σi=1..dxΣj=1..dA⟨[[VA(i)]]A,[[XA(j)]]A⟩, wherein [[VA(i)]]A is a second parameter element of the secret share owned by the first device, that is, a column vector in the first share of the first-party second-type model parameters, and [[XA(j)]]A is a second training data element of the secret share owned by the first device, that is, a column vector in the first share of the first-party training label data.
另外地，第二设备计算第二方第一交叉特征项内积的计算公式如下：Additionally, the second device calculates the second-party first cross-feature term inner product using the following formula:
[[P1]]B=Σi=1..dxΣj=1..dB⟨[[VB(i)]]B,[[XB(j)]]B⟩，其中，[[VB(i)]]B为第二设备拥有的秘密共享的第二方第一参数元素，也即为所述第二方第二类型模型参数的第一份额中的列向量，[[XB(j)]]B为第二设备拥有的秘密共享的第二方第一训练数据元素，也即为第二方训练数据的第一份额中的列向量，另外地，第二设备计算第二方第二交叉特征项内积的计算公式如下：[[P1]]B=Σi=1..dxΣj=1..dB⟨[[VB(i)]]B,[[XB(j)]]B⟩, wherein [[VB(i)]]B is a second-party first parameter element of the secret share owned by the second device, that is, a column vector in the first share of the second-party second-type model parameters, and [[XB(j)]]B is a second-party first training data element of the secret share owned by the second device, that is, a column vector in the first share of the second-party training data; additionally, the second device calculates the second-party second cross-feature term inner product using the following formula:
[[P2]]B=Σi=1..dxΣj=1..dA⟨[[VA(i)]]B,[[XA(j)]]B⟩，其中，[[VA(i)]]B为第二设备拥有的秘密共享的第二方第二参数元素，也即为第一方第二类型模型参数的第二份额中的列向量，[[XA(j)]]B为第二设备拥有的秘密共享的第二方第二训练数据元素，也即为第一方训练标签数据的第二份额中的列向量。[[P2]]B=Σi=1..dxΣj=1..dA⟨[[VA(i)]]B,[[XA(j)]]B⟩, wherein [[VA(i)]]B is a second-party second parameter element of the secret share owned by the second device, that is, a column vector in the second share of the first-party second-type model parameters, and [[XA(j)]]B is a second-party second training data element of the secret share owned by the second device, that is, a column vector in the second share of the first-party training label data.
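The accumulation in steps S211–S213 — one cross inner product per (parameter element, training data element) pair of columns, summed into a single cross-feature term inner product share — can be sketched as follows. Matrix sizes and values are purely illustrative; each device would run the same loop on its own share matrices:

```python
# Hypothetical share matrices held by one device (values are illustrative).
V_share = [[1.0, 2.0],
           [3.0, 4.0]]   # column i is one shared parameter element
X_share = [[5.0, 6.0],
           [7.0, 8.0]]   # column j is one shared training data element

def column(matrix, j):
    """Return column j of a row-major matrix."""
    return [row[j] for row in matrix]

def inner(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

# One cross inner product exists per (parameter element, training data element)
# pair; accumulating all of them yields this device's share of the
# cross-feature term inner product.
cross_share = sum(inner(column(V_share, i), column(X_share, j))
                  for i in range(len(V_share[0]))
                  for j in range(len(X_share[0])))
assert cross_share == 138.0  # 26 + 30 + 38 + 44
```

In the protocol each pairwise inner product of shares is computed jointly via the triple-based secret sharing multiplication, rather than locally as in this plain sketch.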
步骤S22,基于所述预设秘密共享机制,通过与所述第二设备进行联邦交互,计算所述秘密共享交叉特征项内积、所述秘密共享训练数据、所述第一类型共享参数和所述第二类型共享参数共同对应的秘密共享中间参数;Step S22, based on the preset secret sharing mechanism, by performing federated interaction with the second device, calculating the secret sharing intermediate parameter corresponding to the inner product of the secret sharing cross-feature term, the secret sharing training data, the first type of sharing parameter and the second type of sharing parameter;
在本实施例中,基于所述预设秘密共享机制,通过与所述第二设备进行联邦交互,计算所述秘密共享交叉特征项内积、所述秘密共享训练数据、所述第一类型共享参数和所述第二类型共享参数共同对应的秘密共享中间参数,具体地,基于所述秘密共享乘法对应的第三秘密共享乘法三元组,通过与所述第二设备进行联邦交互,基于所述第一类型共享参数和所述秘密共享训练数据,计算第一中间参数项,并基于所述第二类型共享参数、所述秘密共享交叉特征项内积和所述秘密共享训练数据,计算第二中间参数项,进而计算所述第一中间参数项和所述第二中间参数项之和,获得所述秘密共享中间参数。In this embodiment, based on the preset secret sharing mechanism, by performing federal interaction with the second device, the secret sharing intermediate parameter corresponding to the inner product of the secret sharing cross-feature terms, the secret sharing training data, the first type of shared parameters and the second type of shared parameters is calculated. Specifically, based on the third secret sharing multiplication triplet corresponding to the secret sharing multiplication, by performing federal interaction with the second device, the first intermediate parameter item is calculated based on the first type of shared parameters and the secret sharing training data, and the second intermediate parameter item is calculated based on the second type of shared parameters, the inner product of the secret sharing cross-feature terms and the secret sharing training data, and then the sum of the first intermediate parameter item and the second intermediate parameter item is calculated to obtain the secret sharing intermediate parameter.
其中,所述预设秘密共享机制包括秘密共享乘法和秘密共享加法,The preset secret sharing mechanism includes secret sharing multiplication and secret sharing addition.
所述基于所述预设秘密共享机制,通过与所述第二设备进行联邦交互,计算所述秘密共享交叉特征项内积、所述秘密共享训练数据、所述第一类型共享参数和所述第二类型共享参数共同对应的秘密共享中间参数的步骤包括:The step of calculating the secret sharing intermediate parameter corresponding to the secret sharing cross-feature term inner product, the secret sharing training data, the first type of sharing parameter and the second type of sharing parameter by performing federated interaction with the second device based on the preset secret sharing mechanism comprises:
步骤S221,基于所述秘密共享乘法,通过与所述第二设备进行联邦交互,计算所述第一类型共享参数和所述秘密共享训练数据共同对应的第一中间参数项;Step S221, based on the secret sharing multiplication, by performing federation interaction with the second device, calculating a first intermediate parameter item corresponding to the first type of shared parameter and the secret sharing training data;
在本实施例中，需要说明的是，所述第一中间参数项包括第一共享中间参数项和第二共享中间参数项，所述第一类型共享参数包括第一共享第一类型模型参数和第二共享第一类型模型参数，其中，所述第一共享第一类型模型参数为第二方第一类型模型参数的第二份额，所述第二共享第一类型模型参数为所述第一方第一类型模型参数的第一份额，所述秘密共享训练数据包括第一共享训练数据和第二共享训练数据，其中，所述第一共享训练数据为第二方训练数据的第二份额，所述第二共享训练数据为第一方训练标签数据的第一份额。In this embodiment, it should be noted that the first intermediate parameter item includes a first shared intermediate parameter item and a second shared intermediate parameter item; the first type of shared parameters includes a first shared first-type model parameter and a second shared first-type model parameter, wherein the first shared first-type model parameter is the second share of the second-party first-type model parameters and the second shared first-type model parameter is the first share of the first-party first-type model parameters; the secret shared training data includes first shared training data and second shared training data, wherein the first shared training data is the second share of the second-party training data and the second shared training data is the first share of the first-party training label data.
基于所述秘密共享乘法,通过与所述第二设备进行联邦交互,计算所述第一类型共享参数和所述秘密共享训练数据共同对应的第一中间参数项,具体地,基于所述第三秘密共享乘法三元组,通过与所述第二设备进行联邦交互,分别计算所述第一共享第一类型模型参数和所述第一共享训练数据的各列向量的内积,获得各第一中间参数内积,并将各所述第一中间参数内积累加,获得所述第一共享中间参数项,并分别计算所述第二共享第一类型模型参数和所述第二共享训练数据的各列向量的内积,获得各第二中间参数内积,并将各所述第二中间参数内积累加,获得所述第二共享中间参数项,其中,所述第一共享中间参数项的计算表达式如下所示:Based on the secret sharing multiplication, by performing federal interaction with the second device, a first intermediate parameter item corresponding to the first type of shared parameter and the secret shared training data is calculated. Specifically, based on the third secret sharing multiplication triplet, by performing federal interaction with the second device, the inner products of the first shared first type model parameter and each column vector of the first shared training data are respectively calculated to obtain each first intermediate parameter inner product, and each first intermediate parameter inner product is accumulated to obtain the first shared intermediate parameter item, and the inner products of the second shared first type model parameter and each column vector of the second shared training data are respectively calculated to obtain each second intermediate parameter inner product, and each second intermediate parameter inner product is accumulated to obtain the second shared intermediate parameter item, wherein the calculation expression of the first shared intermediate parameter item is as follows:
M1=Σj=1..dB⟨[[wB]]A,[[XB(j)]]A⟩，其中，M1为所述第一共享中间参数项，dB表示XB的特征维度为dB，[[wB]]A为第一设备的秘密共享的第一共享第一类型模型参数，也即为第二方第一类型模型参数的第二份额，[[XB(j)]]A为所述第一设备的秘密共享的第一共享训练数据中的元素，也即为第二方训练数据的第二份额的列向量，XB为所述第一共享训练数据，另外地，所述第二共享中间参数项的计算表达式如下所示：M1=Σj=1..dB⟨[[wB]]A,[[XB(j)]]A⟩, wherein M1 is the first shared intermediate parameter item, dB indicates that the feature dimension of XB is dB, [[wB]]A is the first shared first-type model parameter of the secret share of the first device, that is, the second share of the second-party first-type model parameters, [[XB(j)]]A is an element in the first shared training data of the secret share of the first device, that is, a column vector of the second share of the second-party training data, and XB is the first shared training data; additionally, the calculation expression of the second shared intermediate parameter item is as follows:
M2=Σj=1..dA⟨[[wA]]A,[[XA(j)]]A⟩，其中，M2为所述第二共享中间参数项，dA表示XA的特征维度为dA，[[wA]]A为第一设备的秘密共享的第二共享第一类型模型参数，也即为第一方第一类型模型参数的第一份额，[[XA(j)]]A为所述第一设备的秘密共享的第二共享训练数据中的元素，也即为第一方训练标签数据的第一份额的列向量，XA为所述第二共享训练数据。M2=Σj=1..dA⟨[[wA]]A,[[XA(j)]]A⟩, wherein M2 is the second shared intermediate parameter item, dA indicates that the feature dimension of XA is dA, [[wA]]A is the second shared first-type model parameter of the secret share of the first device, that is, the first share of the first-party first-type model parameters, [[XA(j)]]A is an element in the second shared training data of the secret share of the first device, that is, a column vector of the first share of the first-party training label data, and XA is the second shared training data.
另外地，需要说明的是，所述第二设备将基于第二方第一类型模型参数的第一份额和所述第二方训练数据的第一份额，计算第二方第一共享中间参数项，其中，所述第二方第一共享中间参数项与所述第一共享中间参数项的计算方式一致，且所述第二设备将基于所述第一方第一类型模型参数的第二份额和第一方训练标签数据的第二份额，计算第二方第二共享中间参数项，其中，所述第二方第二共享中间参数项与所述第二共享中间参数项的计算方式一致。Additionally, it should be noted that the second device calculates a second-party first shared intermediate parameter item based on the first share of the second-party first-type model parameters and the first share of the second-party training data, wherein the second-party first shared intermediate parameter item is calculated in the same manner as the first shared intermediate parameter item; the second device also calculates a second-party second shared intermediate parameter item based on the second share of the first-party first-type model parameters and the second share of the first-party training label data, wherein the second-party second shared intermediate parameter item is calculated in the same manner as the second shared intermediate parameter item.
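The column-wise accumulation that yields a shared intermediate parameter item such as M1 can be sketched locally (shapes and values are illustrative assumptions; each device runs the same computation on its own shares, with the cross-party terms handled by secret sharing multiplication):

```python
# Hypothetical shares held by the first device (values are illustrative).
w_share = [0.5, -1.0, 2.0]     # share of the first-type (linear) model parameters
X_share = [[1.0, 4.0],
           [2.0, 5.0],
           [3.0, 6.0]]         # share of training data, one column per element

def column(matrix, j):
    return [row[j] for row in matrix]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

# Accumulate the inner product of the parameter share with each column of the
# data share, as described for the first shared intermediate parameter item M1.
M1_share = sum(inner(w_share, column(X_share, j))
               for j in range(len(X_share[0])))
assert M1_share == 13.5  # (0.5*1 - 1*2 + 2*3) + (0.5*4 - 1*5 + 2*6)
```

The second shared intermediate parameter item M2 follows the same pattern with the first-party parameter and data shares.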
步骤S222,基于所述秘密共享加法和所述秘密共享乘法,计算所述秘密共享交叉特征项内积、所述秘密共享训练数据和所述第二类型共享参数共同对应的第二中间参数项;Step S222, calculating, based on the secret sharing addition and the secret sharing multiplication, a second intermediate parameter item corresponding to the inner product of the secret sharing cross-feature item, the secret sharing training data and the second type of shared parameter;
在本实施例中,需要说明的是,所述第二中间参数项包括第三共享中间参数项和第四共享中间参数项。In this embodiment, it should be noted that the second intermediate parameter item includes a third shared intermediate parameter item and a fourth shared intermediate parameter item.
基于所述秘密共享加法和所述秘密共享乘法,计算所述秘密共享交叉特征项内积、所述秘密共享训练数据和所述第二类型共享参数共同对应的第二中间参数项,具体地,获取第一共享第二类型模型参数对应的第一转置矩阵和所述第一共享训练数据对应的第二转置矩阵,进而基于所述秘密共享乘法,通过与第二设备进行联邦交互,计算所述第一共享第二类型模型参数、第一转置矩阵、所述第一共享训练数据和所述第二转置矩阵的内积,获得第一内积项,并基于所述第一交叉特征项内积和所述第一内积项,计算第三共享中间参数项,相同地,获取第二共享第二类型模型参数对应的第三转置矩阵和所述第二共享训练数据对应的第四转置矩阵,进而基于所述秘密共享乘法,通过与第二设备进行联邦交互,计算所述第二共享第二类型模型参数、第三转置矩阵、所述第二共享训练数据和所述第四转置矩阵的内积,获得第二内积项,并基于所述第二交叉特征项内积和所述第二内积项,计算第四共享中间参数项,其中,所述第三共享中间参数项的表达式如下所示:Based on the secret sharing addition and the secret sharing multiplication, the inner product of the secret sharing cross-characteristic term, the second intermediate parameter term corresponding to the secret sharing training data and the second type of shared parameters is calculated. Specifically, a first transposed matrix corresponding to the first shared second type model parameter and a second transposed matrix corresponding to the first shared training data are obtained. Then, based on the secret sharing multiplication, the inner product of the first shared second type model parameter, the first transposed matrix, the first shared training data and the second transposed matrix is calculated by federal interaction with the second device to obtain a first inner product term, and a third shared intermediate parameter term is calculated based on the first cross-characteristic term inner product and the first inner product term. Similarly, a third transposed matrix corresponding to the second shared second type model parameter and a fourth transposed matrix corresponding to the second shared training data are obtained. 
Then, based on the secret sharing multiplication, the inner product of the second shared second type model parameter, the third transposed matrix, the second shared training data and the fourth transposed matrix is calculated by federal interaction with the second device to obtain a second inner product term, and a fourth shared intermediate parameter term is calculated based on the second cross-characteristic term inner product and the second inner product term, wherein the expression of the third shared intermediate parameter term is as follows:
其中,[[]]A表示秘密共享后第一设备拥有的部分份额的数据,VB为第二方第二类型共享参数,XB为第二方训练数据,为VB的列向量,且VB具有dx个列向量,为XB的列向量,且XB具有dB个列向量,另外地,所述第四共享中间参数项的表达式如下所示:Wherein, [[]] A represents the partial share of data owned by the first device after secret sharing, VB is the second type of sharing parameter of the second party, XB is the training data of the second party, is a column vector of VB , and VB has d x column vectors, is a column vector of X B , and X B has d B column vectors. Additionally, the expression of the fourth shared intermediate parameter term is as follows:
Here, [[·]]_A denotes the partial share of the data held by the first device after secret sharing, V_A denotes the first party's second-type model parameters, which have d_x column vectors, and X_A denotes the first party's training data, which has d_A column vectors; the expression ranges over these column vectors.
In addition, it should be noted that the second device computes a second-party third shared intermediate parameter term and a second-party fourth shared intermediate parameter term based on the partial shares of the data that it holds after secret sharing, in the same manner as the first device. These two terms are as follows:
Here, [[·]]_B denotes the partial share of the data held by the second device after secret sharing.
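The secret sharing addition and secret sharing multiplication that these shared terms rely on can be illustrated with a minimal two-party additive-sharing sketch. The modulus, the scalar (rather than matrix) operands, and the locally generated Beaver triple below are illustrative assumptions for exposition only; in a deployed scheme the triple would come from a dedicated offline phase and the operands would be the parties' matrices.

```python
import random

Q = 2**31 - 1  # illustrative modulus for the share arithmetic

def share(x):
    """Split x into two additive shares: x = ([[x]]_A + [[x]]_B) mod Q."""
    a = random.randrange(Q)
    return a, (x - a) % Q

def reconstruct(xa, xb):
    return (xa + xb) % Q

def add_shares(xa, ya):
    """Secret sharing addition: each party adds its local shares, no interaction."""
    return (xa + ya) % Q

def beaver_mul(x_shares, y_shares):
    """Secret sharing multiplication of shared x and y using a Beaver
    triple (a, b, c) with c = a*b; only e = x-a and f = y-b are opened,
    which reveals nothing about x or y."""
    a, b = random.randrange(Q), random.randrange(Q)
    c = (a * b) % Q
    aA, aB = share(a); bA, bB = share(b); cA, cB = share(c)
    xA, xB = x_shares; yA, yB = y_shares
    e = reconstruct((xA - aA) % Q, (xB - aB) % Q)
    f = reconstruct((yA - bA) % Q, (yB - bB) % Q)
    zA = (cA + e * bA + f * aA + e * f) % Q  # party A adds the public e*f term
    zB = (cB + e * bB + f * aB) % Q
    return zA, zB

x, y = 123, 456
zA, zB = beaver_mul(share(x), share(y))
assert reconstruct(zA, zB) == (x * y) % Q
```

Because z = c + e·b + f·a + e·f expands to x·y, the parties end the protocol holding additive shares of the product without either one learning the other's input.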
Step S223: compute the secret-shared intermediate parameters based on the first intermediate parameter term and the second intermediate parameter term.
In this embodiment, the secret-shared intermediate parameters include a first secret-shared intermediate parameter and a second secret-shared intermediate parameter.
The secret-shared intermediate parameters are computed based on the first intermediate parameter term and the second intermediate parameter term. Specifically, the sum of the first shared intermediate parameter term and the third shared intermediate parameter term is computed to obtain the first secret-shared intermediate parameter, and the sum of the second shared intermediate parameter term and the fourth shared intermediate parameter term is computed to obtain the second secret-shared intermediate parameter. The calculation expression of the first secret-shared intermediate parameter is as follows:
Here, the resulting value is the first secret-shared intermediate parameter. Additionally, the calculation expression of the second secret-shared intermediate parameter is as follows:
Here, [[f(X_A)]] denotes the first secret-shared intermediate parameter. In addition, the second device computes the sum of the second-party first shared intermediate parameter term and the second-party third shared intermediate parameter term to obtain a second-party first secret-shared intermediate parameter, and computes the sum of the second-party second shared intermediate parameter term and the second-party fourth shared intermediate parameter term to obtain a second-party second secret-shared intermediate parameter.
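The quantity whose shares the two devices jointly hold is, in a factorization machine, the model output over linear and cross-feature terms. The sketch below shows the standard factorization machine output computed via the familiar O(k·d) identity for the cross term, and an additive split of that output into two shares; the bias term w0 and all concrete values are illustrative assumptions, not the patent's exact share algebra.

```python
import numpy as np

def fm_output(w0, w, V, x):
    """Standard factorization machine output f(x): bias + linear term +
    pairwise cross-feature term. The protocol in the text keeps additive
    shares of quantities like this rather than the cleartext value."""
    linear = w0 + w @ x
    # sum_{i<j} <V_i, V_j> x_i x_j
    #   = 0.5 * sum_f [ (x @ V[:, f])**2 - (x**2) @ (V[:, f]**2) ]
    cross = 0.5 * np.sum((x @ V) ** 2 - (x ** 2) @ (V ** 2))
    return linear + cross

rng = np.random.default_rng(0)
d, k = 4, 2                      # feature dimension and latent dimension (d_x)
w0, w = 0.1, rng.normal(size=d)  # first-type (linear) model parameters
V = rng.normal(size=(d, k))      # second-type (latent) model parameters
x = rng.normal(size=d)

# Additive sharing of the intermediate parameter: [[f(x)]]_A + [[f(x)]]_B = f(x)
f = fm_output(w0, w, V, x)
fA = rng.normal()
fB = f - fA
assert abs((fA + fB) - f) < 1e-9
```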
Step S23: substitute the secret-shared intermediate parameters and the secret-shared label data into a preset regression-error calculation formula to compute the secret-shared regression error.
In this embodiment, it should be noted that the secret-shared label data is the partial share of the sample labels held by the first device after secret sharing, while the second device holds the second-party secret-shared label data.
The secret-shared intermediate parameters and the secret-shared label data are substituted into the preset regression-error calculation formula to compute the secret-shared regression error. Specifically, the first secret-shared intermediate parameter, the second secret-shared intermediate parameter and the secret-shared label data are substituted into the preset regression-error calculation formula, which is as follows:
Here, Y denotes the sample labels. Likewise, the second device substitutes the second-party first secret-shared intermediate parameter, the second-party second secret-shared intermediate parameter and the second-party secret-shared label data into the preset regression-error calculation formula to compute the second-party secret-shared regression error.
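Because additive shares pass through subtraction unchanged, a residual-style regression error can be computed share-wise with no interaction: each device subtracts its label share from its output share. The sketch below assumes a simple residual f(X) − Y as the error (the source does not show the preset formula); all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Cleartext quantities (never held in full by either device in the protocol)
f_X = rng.normal(size=5)   # model outputs for 5 samples
Y   = rng.normal(size=5)   # sample labels

# Additive shares after secret sharing
fA = rng.normal(size=5); fB = f_X - fA   # [[f(X)]]_A, [[f(X)]]_B
yA = rng.normal(size=5); yB = Y - yA     # [[Y]]_A,    [[Y]]_B

# Each device computes its share of the regression error locally:
eA = fA - yA   # first device's secret-shared regression error
eB = fB - yB   # second device's (second-party) share

assert np.allclose(eA + eB, f_X - Y)
```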
Step S30: determine the first target regression model parameters based on the secret-shared regression error, and assist the second device in determining the second target regression model parameters, so as to construct the vertical federated factorization machine regression model.
In this embodiment, the first target regression model parameters are determined based on the secret-shared regression error, and the second device is assisted in determining the second target regression model parameters, so as to construct the vertical federated factorization machine regression model. Specifically, the computation of the secret-shared regression error is repeated to iteratively update the secret-shared model parameters until a preset model-training end condition is reached, yielding first secret-shared target parameters. Likewise, the second device repeats the computation of the second-party secret-shared regression error to iteratively update the second-party secret-shared model parameters until the preset model-training end condition is reached, yielding second secret-shared target parameters. The first device then receives, from the second device, the second shared first-party target parameter contained in the second secret-shared target parameters, obtains the first shared first-party target parameter contained in the first secret-shared target parameters, and computes the sum of these two parameters to obtain the first target regression model parameters. It also sends the second shared second-party target parameter contained in the first secret-shared target parameters to the second device, so that the second device can compute the sum of this parameter and the first shared second-party target parameter contained in the second secret-shared target parameters to obtain the second target regression model parameters. In other words, the first-type and second-type model parameters after training are determined, and the vertical federated factorization machine regression model can thereby be determined.
In addition, it should be noted that the vertical federated factorization machine regression model includes a recommendation model for personalized recommendation. Compared with existing vertical federated learning methods, the present application requires no homomorphic encryption or decryption when constructing the vertical federated factorization machine regression model through vertical federated learning, which reduces the amount of computation during vertical federated learning modeling and thus improves the computational efficiency of model construction. Moreover, because the model is built through vertical federated learning modeling, the training samples used to construct it have richer features, so the model performs better, and its personalized recommendation effect when used as a recommendation model is correspondingly better.
This embodiment provides a factorization machine regression model construction method. Compared with the prior-art approaches that construct a regression model through unencrypted two-party federated learning or through homomorphically encrypted two-party vertical federated learning modeling, this embodiment obtains secret-shared model parameters and secret-shared training data through secret sharing with the second device; performs vertical federated learning modeling with the second device based on the secret-shared training data and the secret-shared model parameters to compute a secret-shared regression error; and updates the secret-shared model parameters based on that error to obtain secret-shared regression model update parameters. When interacting with the second device, all data sent or received is secret-shared data, no public/private keys generated by a third party are needed to encrypt the data, and all data transmission takes place between the two parties participating in the vertical federated learning modeling, which protects data privacy. Based on the secret-shared regression model update parameters, the first target regression model parameters can then be determined through a decryption interaction with the second device, and the second device is assisted in determining the second target regression model parameters, completing the construction of the vertical federated factorization machine regression model. This overcomes the technical defect of the prior art, in which regression models built through unencrypted two-party federated learning or homomorphically encrypted two-party vertical federated learning modeling cannot protect the data privacy of the participants in vertical federated learning modeling, and thus solves the technical problem that data privacy cannot be protected when constructing a regression model through vertical federated learning modeling.
Further, referring to FIG. 2, based on the first embodiment of the present application, in another embodiment of the present application, the step of determining the first target regression model parameters based on the secret-shared regression error and assisting the second device in determining the second target regression model parameters includes:
Step S31: update the secret-shared model parameters based on the secret-shared regression error to obtain the secret-shared regression model update parameters;
In this embodiment, the secret-shared model parameters are updated based on the secret-shared regression error to obtain the secret-shared regression model update parameters. Specifically, model gradient information corresponding to the secret-shared model parameters is computed based on the secret-shared regression error, and the secret-shared model parameters are then updated based on this gradient information to obtain the secret-shared regression model update parameters.
The secret-shared model parameters include first secret-shared model parameters and second secret-shared model parameters, and the secret-shared regression model update parameters include first shared regression model parameters and second shared regression model parameters.
The step of updating the secret-shared model parameters based on the secret-shared regression error to obtain the secret-shared regression model update parameters includes:
Step S311: compute first gradient information of the secret-shared regression error with respect to the first secret-shared model parameters, and compute second gradient information of the secret-shared regression error with respect to the second secret-shared model parameters;
In this embodiment, it should be noted that the first secret-shared model parameters include the first share of the first party's first-type model parameters and the first share of the first party's second-type model parameters, and the first gradient information includes a first-type gradient and a second-type gradient, where the first-type gradient is the secret-shared gradient corresponding to the first share of the first party's first-type model parameters, and the second-type gradient is the set of secret-shared gradients of each column vector in the first share of the first party's second-type model parameters.
In addition, it should be noted that the second secret-shared model parameters include the second share of the second party's first-type model parameters and the second share of the second party's second-type model parameters, and the second gradient information includes a third-type gradient and a fourth-type gradient, where the third-type gradient is the secret-shared gradient corresponding to the second share of the second party's first-type model parameters, and the fourth-type gradient is the set of secret-shared gradients of each column vector in the second share of the second party's second-type model parameters.
The partial derivative of the secret-shared regression error with respect to the first secret-shared model parameters is computed to obtain the first gradient information. Specifically, the partial derivative of the secret-shared regression error with respect to the first share of the first party's first-type model parameters is computed to obtain the first-type gradient, and the partial derivative of the secret-shared regression error with respect to each column vector in the first share of the first party's second-type model parameters is computed to obtain the second-type gradient, where the calculation expression of the first-type gradient is as follows:
Here, T1 is the first-type gradient; α is a hyperparameter whose value can be set as needed and which controls the value range of the gradient; w_A denotes the first party's first-type model parameters; and [[w_A]]_A denotes the first share of the first party's first-type model parameters. Additionally, the calculation expression of the second-type gradient is as follows:
Here, T2 is the second-type gradient; α is a hyperparameter whose value can be set as needed and which controls the value range of the gradient; V_A denotes the first party's second-type model parameters; and [[V_A]]_A denotes the first share of the first party's second-type model parameters, whose column vectors the expression ranges over. Further, the partial derivative of the secret-shared regression error with respect to the second share of the second party's first-type model parameters is computed to obtain the third-type gradient, and the partial derivative of the secret-shared regression error with respect to each column vector in the second share of the second party's second-type model parameters is computed to obtain the fourth-type gradient, where the calculation expression of the third-type gradient is as follows:
Here, T3 is the third-type gradient; α is a hyperparameter whose value can be set as needed and which controls the value range of the gradient; w_B denotes the second party's first-type model parameters; and [[w_B]]_A denotes the second share of the second party's first-type model parameters. Additionally, the calculation expression of the fourth-type gradient is as follows:
Here, T4 is the fourth-type gradient; α is a hyperparameter whose value can be set as needed and which controls the value range of the gradient; V_B denotes the second party's second-type model parameters; and [[V_B]]_A denotes the second share of the second party's second-type model parameters, whose column vectors the expression ranges over.
In addition, it should be noted that the second device can likewise compute the partial derivative of the second-party secret-shared regression error with respect to the first share of the second party's first-type model parameters to obtain a fifth-type gradient, and the partial derivative of the second-party secret-shared regression error with respect to each column vector in the first share of the second party's second-type model parameters to obtain a sixth-type gradient; it then computes the partial derivative of the second-party secret-shared regression error with respect to the second share of the first party's first-type model parameters to obtain a seventh-type gradient, and the partial derivative of the second-party secret-shared regression error with respect to each column vector in the second share of the first party's second-type model parameters to obtain an eighth-type gradient, where the second device computes the gradients in the same manner as the first device.
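For reference, the cleartext gradients of a squared-loss factorization machine, of which the shared gradients T1 through T8 are additive shares, take the form below. The loss 0.5·(f(x) − y)², the absence of the α scaling factor, and all names here are illustrative assumptions; the sketch is validated against a central-difference check rather than against the patent's (unshown) expressions.

```python
import numpy as np

def fm(w, V, x):
    """Factorization machine output without bias, for gradient checking."""
    return w @ x + 0.5 * np.sum((x @ V) ** 2 - (x ** 2) @ (V ** 2))

def fm_grads(w, V, x, y):
    """Cleartext gradients of the squared loss 0.5*(f(x)-y)**2:
    dL/dw_i      = e * x_i
    dL/dV[i, f]  = e * (x_i * sum_j V[j, f] x_j  -  V[i, f] * x_i**2)"""
    e = fm(w, V, x) - y                 # regression error (residual)
    grad_w = e * x
    s = x @ V                           # s_f = sum_j V[j, f] * x_j
    grad_V = e * (np.outer(x, s) - V * (x ** 2)[:, None])
    return grad_w, grad_V

rng = np.random.default_rng(2)
d, k = 3, 2
w, V = rng.normal(size=d), rng.normal(size=(d, k))
x, y = rng.normal(size=d), 0.7
gw, gV = fm_grads(w, V, x, y)

# Central-difference check of one latent-parameter gradient entry
eps = 1e-6
Vp = V.copy(); Vp[0, 0] += eps
Vm = V.copy(); Vm[0, 0] -= eps
num = (0.5 * (fm(w, Vp, x) - y) ** 2 - 0.5 * (fm(w, Vm, x) - y) ** 2) / (2 * eps)
assert abs(num - gV[0, 0]) < 1e-5
```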
Step S312: update the first secret-shared model parameters based on the first gradient information and preset first learning parameters until a preset federated-learning end condition is met, obtaining the first shared regression model parameters;
In this embodiment, it should be noted that the preset federated-learning end condition includes conditions such as convergence of the loss function or reaching a preset iteration-count threshold, and the preset first learning parameters include a first learning rate and a second learning rate.
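The preset federated-learning end condition described above can be sketched as a simple predicate over the loss history and round counter. The function name, threshold values and tolerance below are illustrative assumptions.

```python
def training_finished(losses, round_idx, max_rounds=100, tol=1e-6):
    """Preset end condition: stop when the loss has converged (change
    between consecutive rounds below tol) or a preset iteration-count
    threshold is reached."""
    if round_idx >= max_rounds:
        return True
    if len(losses) >= 2 and abs(losses[-1] - losses[-2]) < tol:
        return True
    return False

assert training_finished([0.5, 0.5 + 1e-9], round_idx=2)    # loss converged
assert training_finished([], round_idx=100)                 # hit iteration cap
assert not training_finished([1.0, 0.4], round_idx=2)       # keep training
```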
The first secret-shared model parameters are updated based on the first gradient information and the preset first learning parameters until the preset federated-learning end condition is met, yielding the first shared regression model parameters. Specifically, the product of the first-type gradient and the first learning rate is computed to obtain a first gradient-descent value, and the difference between the first share of the first party's first-type model parameters and the first gradient-descent value is computed to obtain a first update parameter; the product of the second-type gradient and the second learning rate is computed to obtain a second gradient-descent value, and the difference between the first share of the first party's second-type model parameters and the second gradient-descent value is computed to obtain a second update parameter. It is then determined whether the first update parameter and the second update parameter satisfy the preset federated-learning end condition. If so, the first update parameter and the second update parameter together serve as the first shared regression model parameters; if not, the gradient information is recomputed to iteratively update the first secret-shared model parameters until the preset federated-learning end condition is met. The calculation expression of the first update parameter is as follows:
Here, δ1 is the first learning rate and the resulting value is the first update parameter. Additionally, the calculation expression of the second update parameter is as follows:
Here, δ2 is the second learning rate and the resulting value is the second update parameter.
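Because gradient descent is a linear operation, each device can apply the update share minus learning rate times gradient purely locally on its shares, and the underlying parameter is updated implicitly. A minimal sketch, with all shapes and values illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

wA_share = rng.normal(size=4)       # [[w_A]]_A: first share of first-type params
VA_share = rng.normal(size=(4, 2))  # first share of second-type (latent) params
T1 = rng.normal(size=4)             # first-type gradient (itself a shared value)
T2 = rng.normal(size=(4, 2))        # second-type gradient
delta1, delta2 = 0.05, 0.05         # first and second learning rates

# First update parameter: first share minus (first learning rate * T1)
wA_share_new = wA_share - delta1 * T1
# Second update parameter: applied per column vector of the latent share
VA_share_new = VA_share - delta2 * T2

assert np.allclose(wA_share - wA_share_new, delta1 * T1)
```

If the other device applies the same rule to its shares with the matching gradient shares, the sum of the two updated shares equals the gradient-descent update of the full parameter.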
Step S313: update the second secret-shared model parameters based on the second gradient information and preset second learning parameters until the preset federated-learning end condition is met, obtaining the second shared regression model parameters.
In this embodiment, it should be noted that the preset second learning parameters include a third learning rate and a fourth learning rate.
The second secret-shared model parameters are updated based on the second gradient information and the preset second learning parameters until the preset federated-learning end condition is met, yielding the second shared regression model parameters. Specifically, the product of the third-type gradient and the third learning rate is computed to obtain a third gradient-descent value, and the difference between the second share of the second party's first-type model parameters and the third gradient-descent value is computed to obtain a third update parameter; the product of the fourth-type gradient and the fourth learning rate is computed to obtain a fourth gradient-descent value, and the difference between the second share of the second party's second-type model parameters and the fourth gradient-descent value is computed to obtain a fourth update parameter. It is then determined whether the third update parameter and the fourth update parameter satisfy the preset federated-learning end condition. If so, the third update parameter and the fourth update parameter together serve as the second shared regression model parameters; if not, the gradient information is recomputed to iteratively update the second secret-shared model parameters until the preset federated-learning end condition is met. The calculation expression of the third update parameter is as follows:
Here, δ3 is the third learning rate and the resulting value is the third update parameter. Additionally, the calculation expression of the fourth update parameter is as follows:
Here, δ4 is the fourth learning rate and the resulting value is the fourth update parameter.
In addition, it should be noted that the second device computes a fifth update parameter based on the fifth-type gradient and a preset fifth learning rate, a sixth update parameter based on the sixth-type gradient and a preset sixth learning rate, a seventh update parameter based on the seventh-type gradient and a preset seventh learning rate, and an eighth update parameter based on the eighth-type gradient and a preset eighth learning rate, where the second device computes each gradient in the same manner as the first device.
Step S32: based on the secret-shared regression model update parameters, determine the first target regression model parameters through a decryption interaction with the second device, so that the second device can determine the second target regression model parameters.
In this embodiment, the first target regression model parameters are determined through a decryption interaction with the second device based on the secret-shared regression model update parameters, so that the second device can determine the second target regression model parameters. Specifically, the first device receives the seventh update parameter and the eighth update parameter sent by the second device, computes the first target regression model parameters based on the first, second, seventh and eighth update parameters, and sends the third update parameter and the fourth update parameter to the second device, so that the second device can compute the second target regression model parameters based on the third, fourth, fifth and sixth update parameters.
The secret-shared regression model update parameters include a first share of the first-party model update parameters and a second share of the second-party model update parameters.
The step of determining the first target regression model parameters through a decryption interaction with the second device based on the secret-shared regression model update parameters, so that the second device can determine the second target regression model parameters, includes:
Step S321: receive the second share of the first-party model update parameters determined by the second device through vertical federated learning modeling, and send the second share of the second-party model update parameters to the second device, so that the second device can determine the second target regression model parameters based on the first share of the second-party model update parameters it determined through vertical federated learning modeling and the received second share of the second-party model update parameters;
In this embodiment, it should be noted that the first share of the first-party model update parameters includes the first update parameter and the second update parameter, the second share of the second-party model update parameters includes the third update parameter and the fourth update parameter, the first share of the second-party model update parameters includes the fifth update parameter and the sixth update parameter, and the second share of the first-party model update parameters includes the seventh update parameter and the eighth update parameter.
接收所述第二设备基于纵向联邦学习建模确定的第一方模型更新参数第二份额,并将所述第二方模型更新参数第二份额至所述第二设备,以供所述第二设备基于纵向联邦学习建模确定的第二方模型更新参数第一份额和所述第二方模型更新参数第二份额,确定所述第二目标回归模型参数,具体地,接收所述第二设备发送的第七更新参数和第八更新参数,并将第三更新参数和第四更新参数发送至所述第二设备,以供所述第二设备计算所述第三更新参数和所述第五更新参数之和,获得第二方第一类型模型更新参数,计算所述第四更新参数和所述第五更新参数之和,获得第二方第二类型模型更新参数,并将所述第二方第一类型模型更新参数和所述第二方第二类型模型更新参数共同作为所述第二目标回归模型参数,其中,所述第二方第一类型模型更新参数的计算表达式如下:Receive the second share of the first-party model update parameters determined by the second device based on the vertical federated learning modeling, and send the second share of the second-party model update parameters to the second device, so that the second device can determine the second target regression model parameters based on the first share of the second-party model update parameters and the second share of the second-party model update parameters determined by the second device based on the vertical federated learning modeling. Specifically, receive the seventh update parameter and the eighth update parameter sent by the second device, and send the third update parameter and the fourth update parameter to the second device, so that the second device can calculate the sum of the third update parameter and the fifth update parameter to obtain the second-party first-type model update parameters, calculate the sum of the fourth update parameter and the fifth update parameter to obtain the second-party second-type model update parameters, and use the second-party first-type model update parameters and the second-party second-type model update parameters together as the second target regression model parameters, wherein the calculation expression of the second-party first-type model update parameters is as follows:
w_B = u_3 + u_5

where w_B denotes the second-party first-type model update parameter, u_3 the third update parameter, and u_5 the fifth update parameter. In addition, the calculation expression of the second-party second-type model update parameter is as follows:

V_B = u_4 + u_6

where V_B denotes the second-party second-type model update parameter, u_4 the fourth update parameter, and u_6 the sixth update parameter.
Step S322: aggregate the first share of the first-party model update parameters and the second share of the first-party model update parameters to obtain the first target regression model parameters.
In this embodiment, the first share and the second share of the first-party model update parameters are aggregated to obtain the first target regression model parameters. Specifically, the sum of the first update parameter and the seventh update parameter is computed to obtain the first-party first-type model update parameter, and the sum of the second update parameter and the eighth update parameter is computed to obtain the first-party second-type model update parameter; the two together are taken as the first target regression model parameters, where the calculation expression of the first-party first-type model update parameter is as follows:
w_A = u_1 + u_7

where w_A denotes the first-party first-type model update parameter, u_1 the first update parameter, and u_7 the seventh update parameter. In addition, the calculation expression of the first-party second-type model update parameter is as follows:

V_A = u_2 + u_8

where V_A denotes the first-party second-type model update parameter, u_2 the second update parameter, and u_8 the eighth update parameter.
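The aggregation in this step, and its counterpart on the second device, is a plain sum of additive shares. A minimal sketch of this reconstruction follows; the variable names u1, u2, u7, u8 (mirroring the first, second, seventh, and eighth update parameters) and the concrete values are illustrative only:

```python
import numpy as np

def aggregate_shares(local_share, received_share):
    """Reconstruct a plaintext parameter by adding its two additive shares."""
    return np.asarray(local_share) + np.asarray(received_share)

# The first device holds (u1, u2) locally and receives (u7, u8)
# from the second device during the decryption interaction.
u1, u2 = np.array([0.3, -0.1]), np.array([1.2])
u7, u8 = np.array([0.2, 0.4]), np.array([-0.7])

first_type = aggregate_shares(u1, u7)   # first-party first-type update parameter
second_type = aggregate_shares(u2, u8)  # first-party second-type update parameter
```

Because reconstruction is a single addition per parameter, neither party ever needs the other party's full plaintext shares for parameters it does not own.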
This embodiment provides a method for updating the model parameters of a vertical federated factorization machine regression model based on secret-shared regression errors. First, the first device updates its secret-shared model parameters by gradient computation on the secret-shared regression error, obtaining the secret-shared regression model update parameters for the current iteration; in parallel, the second device updates the second-party secret-shared model parameters based on the second-party secret-shared regression error, obtaining the second-party secret-shared regression model update parameters for the current iteration. Once the preset federated learning termination condition is reached, the first device and the second device perform a decryption interaction under the secret sharing mechanism: the first device uses its secret-shared regression model update parameters to assist the second device in determining the second target regression model parameters from the second-party secret-shared regression model update parameters, while the second device in turn uses the second-party secret-shared regression model update parameters to assist the first device in determining the first target regression model parameters. This completes the construction of the vertical federated factorization machine regression model, and lays the foundation for overcoming the technical defect in the prior art whereby regression models built with unencrypted two-party federated learning, or with homomorphic-encryption-based two-party vertical federated learning modeling, fail to protect the data privacy of the participants in vertical federated learning.
Further, referring to FIG. 3, based on the first and second embodiments of the present application, in another embodiment of the present application the personalized recommendation method is applied to a first device, and the personalized recommendation method includes:
Step A10: perform secret sharing with a second device to obtain secret-shared to-be-recommended user data and secret-shared model parameters.
In this embodiment, it should be noted that the first device and the second device are both participants in vertical federated learning, and that before the secret sharing takes place they have already trained a preset scoring model through secret sharing and vertical federated learning. The preset scoring model is a trained factorization machine regression model used to predict a user's score for the corresponding item, and its model expression is as follows:

z(x) = <w, x> + ∑_{i<j} <V_i, V_j> x_i x_j

where x is the model input data, w and V are the model parameters, and z(x) is the model output, that is, the user's score for the item.
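The model expression above can be evaluated in plaintext as a short sketch. The helper name and the concrete values are illustrative; the O(n·k) rewriting of the pairwise term is a standard factorization machine identity, not something the patent specifies:

```python
import numpy as np

def fm_score(x, w, V):
    """Plaintext factorization machine score: z(x) = <w, x> + sum_{i<j} <V_i, V_j> x_i x_j."""
    linear = float(w @ x)
    # Standard O(n*k) identity for the pairwise term:
    # sum_{i<j} <V_i,V_j> x_i x_j = 0.5 * sum_f ((sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2)
    s = V.T @ x                     # shape (k,)
    s_sq = (V ** 2).T @ (x ** 2)    # shape (k,)
    return linear + 0.5 * float(np.sum(s ** 2 - s_sq))

# A user who interacted with the first and third items
x = np.array([1.0, 0.0, 1.0])
w = np.array([0.5, 0.0, 0.2])
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
score = fm_score(x, w, V)  # 0.7 (linear) + 1.0 (pairwise, from <V_0, V_2>) = 1.7
```

In the federated setting of this patent, w, V, and x never appear in plaintext on one device; each factor enters the computation as additive shares.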
Secret sharing is performed with the second device to obtain the secret-shared to-be-recommended user data and the secret-shared model parameters. Specifically, the first device obtains the first-party scoring model parameters of the preset scoring model and the first-party to-be-recommended user data, where the to-be-recommended user data is data associated with the user to be recommended, for example the user's interests and the user's historical item scores; at the same time, the second device obtains the second-party scoring model parameters of the preset scoring model and the second-party to-be-recommended user data. Because the preset scoring model is built through vertical federated learning, the portion of its model parameters held by the first device constitutes the first-party scoring model parameters, and the portion held by the second device constitutes the second-party scoring model parameters; the first-party to-be-recommended user data is the associated data of the user collected by the first device, and the second-party to-be-recommended user data is the user-item associated data collected by the second device. Both can be represented as vectors. For example, if the first-party to-be-recommended user data is the vector (1, 0, 1, 0), where a 1 encodes that the user clicked the corresponding item and a 0 that the user did not, then (1, 0, 1, 0) indicates that the user clicked items A and C but not items B and D.
Further, the first device performs secret sharing with the second device based on the first-party scoring model parameters and the first-party to-be-recommended user data, while the second device contributes the second-party scoring model parameters and the second-party to-be-recommended user data. As a result, the first device obtains the secret-shared model parameters and the secret-shared to-be-recommended user data, and the second device obtains the second-party secret-shared model parameters and the second-party secret-shared to-be-recommended user data, where:
the secret-shared model parameters comprise the first shared first-party model parameters and the first shared second-party model parameters; the secret-shared to-be-recommended user data comprises the first shared first-party to-be-recommended user data and the first shared second-party to-be-recommended user data; the second-party secret-shared model parameters comprise the second shared first-party model parameters and the second shared second-party model parameters; and the second-party secret-shared to-be-recommended user data comprises the second shared first-party to-be-recommended user data and the second shared second-party to-be-recommended user data.
Here, the first shared first-party model parameters are the first share of the first-party scoring model parameters, and the second shared first-party model parameters are their second share; the first shared second-party model parameters are the first share of the second-party scoring model parameters, and the second shared second-party model parameters are their second share. Likewise, the first and second shared first-party to-be-recommended user data are the first and second shares of the first-party to-be-recommended user data, and the first and second shared second-party to-be-recommended user data are the first and second shares of the second-party to-be-recommended user data.
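The splitting described above can be sketched as additive secret sharing, where each value is split into two shares that sum back to the original. This is a minimal illustration under assumptions of my own (Gaussian masks for simplicity; production schemes sample uniformly over a finite ring):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_shares(secret):
    """Split a value into two additive shares with secret = share_1 + share_2.

    Each share in isolation is random noise and reveals nothing about the secret."""
    secret = np.asarray(secret, dtype=float)
    share_1 = rng.standard_normal(secret.shape)
    return share_1, secret - share_1

# The first device splits its scoring-model parameters and its user-data vector,
# keeps the first shares, and sends the second shares to the second device.
params = np.array([0.8, -0.3, 1.1, 0.4])
user_x = np.array([1.0, 0.0, 1.0, 0.0])   # clicked items A and C, not B and D
p1, p2 = make_shares(params)
x1, x2 = make_shares(user_x)
```

The second device performs the same split on its own parameters and data, so after the exchange each device holds one share of every quantity.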
Step A20: input the secret-shared to-be-recommended user data into the preset scoring model, so as to score the to-be-recommended items corresponding to that data based on the secret-shared model parameters, obtaining a first secret-shared scoring result.
In this embodiment, the secret-shared to-be-recommended user data is input into the preset scoring model, and the to-be-recommended items corresponding to that data are scored based on the secret-shared model parameters to obtain the first secret-shared scoring result. Specifically, the first shared first-party to-be-recommended user data and the first shared second-party to-be-recommended user data are each input into the preset scoring model: substituting the first shared first-party to-be-recommended user data and the first shared first-party model parameters into the model expression of the preset scoring model, the first shared first-party score is computed by secret-sharing multiplication; substituting the first shared second-party to-be-recommended user data and the first shared second-party model parameters into the model expression, the first shared second-party score is computed by secret-sharing multiplication; and the first shared first-party score and the first shared second-party score together form the first secret-shared scoring result, both being model output values. Similarly, the second device computes the second shared first-party score by secret-sharing multiplication from the second shared first-party to-be-recommended user data and the second shared first-party model parameters, and computes the second shared second-party score by secret-sharing multiplication from the second shared second-party to-be-recommended user data and the second shared second-party model parameters.
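The patent invokes "secret-sharing multiplication" without fixing a protocol; the standard realization is Beaver triple multiplication, sketched below as an assumption of this note. Each party opens only the masked differences d = x − a and e = y − b, so neither learns the other's inputs:

```python
import numpy as np

rng = np.random.default_rng(1)

def share(v):
    """Split a scalar into two additive shares."""
    s1 = rng.standard_normal()
    return s1, v - s1

def beaver_mul(x_sh, y_sh):
    """Multiply two additively shared scalars with a Beaver triple (a, b, c = a*b).

    In a deployment the triple is precomputed by a trusted dealer or an
    offline phase; here it is generated inline for illustration."""
    a, b = rng.standard_normal(), rng.standard_normal()
    a_sh, b_sh, c_sh = share(a), share(b), share(a * b)
    # Each party i computes d_i = x_i - a_i and e_i = y_i - b_i;
    # d and e are then opened (reconstructed) publicly.
    d = (x_sh[0] - a_sh[0]) + (x_sh[1] - a_sh[1])
    e = (y_sh[0] - b_sh[0]) + (y_sh[1] - b_sh[1])
    # Party 0 adds the public d*e correction exactly once.
    z0 = c_sh[0] + d * b_sh[0] + e * a_sh[0] + d * e
    z1 = c_sh[1] + d * b_sh[1] + e * a_sh[1]
    return z0, z1   # shares of x * y
```

Correctness follows from c + d·b + e·a + d·e = ab + (x−a)b + (y−b)a + (x−a)(y−b) = xy.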
Step A30: based on the first secret-shared scoring result, perform federated interaction with the second device to compute a target score jointly with the second secret-shared scoring result determined by the second device.
In this embodiment, federated interaction is performed with the second device based on the first secret-shared scoring result, so as to compute the target score jointly with the second secret-shared scoring result determined by the second device. Specifically, the first secret-shared scoring result and the second secret-shared scoring result are aggregated through the federated interaction to obtain the target score, where it should be noted that the target score is the score, computed by the preset scoring model, of the user for the to-be-recommended item.
Here, the first secret-shared scoring result comprises the first shared first-party score and the first shared second-party score, and the second secret-shared scoring result comprises the second shared first-party score and the second shared second-party score.
The step of performing federated interaction with the second device based on the first secret-shared scoring result, so as to compute the target score jointly with the second secret-shared scoring result determined by the second device, includes:
Step A31: receive the second shared first-party score and the second shared second-party score sent by the second device.
Step A32: compute a first-party score based on the first shared first-party score and the second shared first-party score.
In this embodiment, the first-party score is computed from the first shared first-party score and the second shared first-party score. Specifically, the sum of the first shared first-party score and the second shared first-party score is computed to obtain the first-party score.
Step A33: compute a second-party score based on the first shared second-party score and the second shared second-party score.
In this embodiment, the second-party score is computed from the first shared second-party score and the second shared second-party score. Specifically, the sum of the first shared second-party score and the second shared second-party score is computed to obtain the second-party score.
Step A34: aggregate the first-party score and the second-party score to obtain the target score.
In this embodiment, the first-party score and the second-party score are aggregated to obtain the target score. Specifically, the aggregation is performed according to a preset aggregation rule, where the preset aggregation rules include summation, weighted averaging, and the like.
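The two aggregation rules named above can be sketched in a few lines. The function name, the `rule` strings, and the default weights are illustrative choices, not part of the patent:

```python
def aggregate_scores(first_party_score, second_party_score,
                     rule="sum", weights=(0.5, 0.5)):
    """Combine the two reconstructed party scores into the target score.

    'sum' and 'weighted_mean' mirror the summation and weighted-averaging
    aggregation rules mentioned in the text."""
    if rule == "sum":
        return first_party_score + second_party_score
    if rule == "weighted_mean":
        w1, w2 = weights
        return (w1 * first_party_score + w2 * second_party_score) / (w1 + w2)
    raise ValueError(f"unknown aggregation rule: {rule}")
```

For the factorization machine expression in Step A10, plain summation is the natural choice, since the model output decomposes additively across the parties' shares.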
Step A40: generate a target recommendation list corresponding to the to-be-recommended items based on the target scores.
In this embodiment, a target recommendation list corresponding to the to-be-recommended items is generated based on the target scores. Specifically, the target-score computation is repeated to obtain the target scores of different target users for the same to-be-recommended item; the target users are then ranked by their target scores to generate a recommended-user list for that item, which serves as the target recommendation list.
In another feasible implementation, a target recommendation list is generated for the to-be-recommended user corresponding to the secret-shared to-be-recommended user data. Specifically, the target-score computation is repeated to obtain the same target user's scores for different to-be-recommended items; the items are then ranked by target score to generate a recommended-item list for that user, which serves as the target recommendation list.
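Both variants of Step A40 reduce to ranking keys by their target score; the helper and the sample scores below are illustrative:

```python
def build_recommendation_list(scores):
    """Rank keys by target score, highest first.

    The keys can be user IDs (ranking users for one item, the first scheme)
    or item IDs (ranking items for one user, the alternative scheme)."""
    return [key for key, _ in
            sorted(scores.items(), key=lambda kv: kv[1], reverse=True)]

item_scores = {"item_A": 4.2, "item_B": 1.5, "item_C": 3.8}
target_recommendation_list = build_recommendation_list(item_scores)
# ['item_A', 'item_C', 'item_B']
```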
This embodiment provides a personalized recommendation method based on secret sharing and vertical federated learning. The first device performs secret sharing with the second device to obtain the secret-shared to-be-recommended user data and the secret-shared model parameters; inputs the secret-shared to-be-recommended user data into the preset scoring model so as to score the corresponding to-be-recommended items based on the secret-shared model parameters, obtaining the first secret-shared scoring result; performs federated interaction with the second device to compute the target score jointly with the second secret-shared scoring result determined by the second device; and generates the target recommendation list for the to-be-recommended items from the target scores.
During the personalized recommendation process, all data sent or received in the interaction between the first device and the second device are secret-shared data, so no public/private keys generated by a third party are needed to encrypt the data, and all data transmission takes place between the two participants of the vertical federated learning. This protects data privacy while avoiding the complex encryption and decryption computations that would otherwise be applied to the data. Moreover, because both the secret sharing process and the corresponding reconstruction require only simple arithmetic operations, the computational complexity is reduced, thereby improving the computational efficiency of personalized recommendation with the factorization machine model.
Referring to FIG. 4, FIG. 4 is a schematic diagram of the device structure of the hardware operating environment involved in the embodiments of the present application.
As shown in FIG. 4, the factorization machine regression model construction device may include a processor 1001 (for example, a CPU), a memory 1005, and a communication bus 1002, where the communication bus 1002 implements the connection and communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a stable non-volatile memory such as a disk memory, and may optionally be a storage device independent of the aforementioned processor 1001.
Optionally, the factorization machine regression model construction device may further include a user interface, a network interface, a camera, an RF (Radio Frequency) circuit, sensors, an audio circuit, a WiFi module, and the like. The user interface may include a display (Display) and an input submodule such as a keyboard (Keyboard), and may optionally further include standard wired and wireless interfaces. The network interface may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
Those skilled in the art will appreciate that the device structure shown in FIG. 4 does not constitute a limitation on the factorization machine regression model construction device, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
As shown in FIG. 4, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, and a factorization machine regression model construction program. The operating system is a program that manages and controls the hardware and software resources of the device and supports the running of the factorization machine regression model construction program and other software and/or programs. The network communication module implements communication among the components within the memory 1005 and with other hardware and software in the factorization machine regression model construction system.
In the factorization machine regression model construction device shown in FIG. 4, the processor 1001 is configured to execute the factorization machine regression model construction program stored in the memory 1005, so as to implement the steps of any of the factorization machine regression model construction methods described above.
The specific implementations of the factorization machine regression model construction device of the present application are substantially the same as the embodiments of the factorization machine regression model construction method described above and are not repeated here.
Referring to FIG. 5, FIG. 5 is a schematic diagram of the device structure of the hardware operating environment involved in the embodiments of the present application.
As shown in FIG. 5, the personalized recommendation device may include a processor 1001 (for example, a CPU), a memory 1005, and a communication bus 1002, where the communication bus 1002 implements the connection and communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a stable non-volatile memory such as a disk memory, and may optionally be a storage device independent of the aforementioned processor 1001.
Optionally, the personalized recommendation device may further include a user interface, a network interface, a camera, an RF (Radio Frequency) circuit, sensors, an audio circuit, a WiFi module, and the like. The user interface may include a display (Display) and an input submodule such as a keyboard (Keyboard), and may optionally further include standard wired and wireless interfaces. The network interface may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
Those skilled in the art will appreciate that the device structure shown in FIG. 5 does not constitute a limitation on the personalized recommendation device, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
As shown in FIG. 5, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, and a personalized recommendation program. The operating system is a program that manages and controls the hardware and software resources of the personalized recommendation device and supports the running of the personalized recommendation program and other software and/or programs. The network communication module implements communication among the components within the memory 1005 and with other hardware and software in the personalized recommendation system.
In the personalized recommendation device shown in FIG. 5, the processor 1001 is configured to execute the personalized recommendation program stored in the memory 1005, so as to implement the steps of any of the personalized recommendation methods described above.
The specific implementations of the personalized recommendation device of the present application are substantially the same as the embodiments of the personalized recommendation method described above and are not repeated here.
An embodiment of the present application further provides a factorization machine regression model construction apparatus, applied to the factorization machine regression model construction device, the apparatus including:
a secret sharing module, configured to perform secret sharing with a second device to obtain secret-shared model parameters and secret-shared training data;
a vertical federation module, configured to perform vertical federated learning modeling with the second device based on the secret-shared training data and the secret-shared model parameters, and to compute a secret-shared regression error;
a determination module, configured to determine first target regression model parameters based on the secret-shared regression error, and to assist the second device in determining second target regression model parameters, so as to construct the vertical federated factorization machine regression model.
Optionally, the vertical federation module includes:
a first calculation submodule, configured to compute, based on a preset secret sharing mechanism and through federated interaction with the second device, the secret-shared cross-feature inner product jointly corresponding to the second-type shared parameters and the secret-shared training data;
a second calculation submodule, configured to compute, based on the preset secret sharing mechanism and through federated interaction with the second device, the secret-shared intermediate parameter jointly corresponding to the secret-shared cross-feature inner product, the secret-shared training data, the first-type shared parameters, and the second-type shared parameters;
a third calculation submodule, configured to substitute the secret-shared intermediate parameter and the secret-shared label data into a preset regression error calculation formula, so as to compute the secret-shared regression error.
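The error computation in the third calculation submodule can stay entirely on shares, because additive shares subtract componentwise. The sketch below assumes a residual of the form z − y; the patent only names a "preset regression error calculation formula", so that choice is an assumption of this note:

```python
def shared_regression_error(z_shares, y_shares):
    """Each party locally subtracts its label share from its prediction share.

    The two results form a valid additive sharing of the plaintext error
    z - y, with no extra communication between the parties."""
    return tuple(z - y for z, y in zip(z_shares, y_shares))

z_sh = (1.4, 0.6)    # additive shares of the prediction z = 2.0
y_sh = (0.9, -0.4)   # additive shares of the label y = 0.5
e_sh = shared_regression_error(z_sh, y_sh)
# e_sh[0] + e_sh[1] reconstructs the plaintext error z - y = 1.5
```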
Optionally, the first calculation submodule includes:
a first calculation unit, configured to compute, based on the secret-sharing multiplication and through federated interaction with the second device, the cross inner products between the elements of the first shared second-type model parameters and the elements of the first shared training data, obtaining the first element cross inner products;
a second calculation unit, configured to compute, based on the secret-sharing multiplication and through federated interaction with the second device, the cross inner products between the elements of the second-party second-type shared parameters and the elements of the second shared training data, obtaining the second element cross inner products;
a third calculation unit, configured to accumulate the first element cross inner products and the second element cross inner products respectively, obtaining the first cross-feature inner product corresponding to the first element cross inner products and the second cross-feature inner product corresponding to the second element cross inner products.
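In plaintext, the compute-then-accumulate pattern of these units corresponds to summing the pairwise terms <V_i, V_j>·x_i·x_j over i < j. A minimal illustration (the helper name and the values are mine, and the shared-arithmetic layer is omitted):

```python
import numpy as np

def cross_feature_inner_product(V, x):
    """Accumulate the per-element-pair terms <V_i, V_j> * x_i * x_j over i < j.

    This is the plaintext analogue of what the first and second calculation
    units compute elementwise on shares and the third unit accumulates."""
    total = 0.0
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            total += float(V[i] @ V[j]) * x[i] * x[j]
    return total

V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x = np.array([1.0, 1.0, 1.0])
result = cross_feature_inner_product(V, x)  # <V0,V1> + <V0,V2> + <V1,V2> = 0 + 1 + 1 = 2.0
```

In the federated version, each scalar product here becomes a secret-sharing multiplication, while the accumulation is a local addition of shares.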
Optionally, the second calculation submodule includes:
a fourth calculation unit, configured to calculate, based on the secret sharing multiplication and through federated interaction with the second device, a first intermediate parameter term jointly corresponding to the first-type shared parameter and the secret sharing training data;
a fifth calculation unit, configured to calculate, based on the secret sharing addition and the secret sharing multiplication, a second intermediate parameter term jointly corresponding to the secret sharing cross-feature term inner product, the secret sharing training data, and the second-type shared parameter;
a sixth calculation unit, configured to calculate the secret sharing intermediate parameter based on the first intermediate parameter term and the second intermediate parameter term.
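In plain text, the two intermediate terms of a factorization machine are the linear part and the pairwise cross-feature part. A sketch of that computation without secret sharing, using the standard FM identity for the cross term (variable names are illustrative):

```python
def fm_intermediate(w, V, x):
    """Plain-text analogue of the two intermediate parameter terms:
    first term  = linear part w . x
    second term = sum_{i<j} <v_i, v_j> x_i x_j, computed via the identity
    0.5 * sum_f [(sum_i V[i][f]*x[i])**2 - sum_i (V[i][f]*x[i])**2]."""
    first = sum(wi * xi for wi, xi in zip(w, x))
    k = len(V[0])
    second = 0.0
    for f in range(k):
        s = sum(V[i][f] * x[i] for i in range(len(x)))
        sq = sum((V[i][f] * x[i]) ** 2 for i in range(len(x)))
        second += 0.5 * (s * s - sq)
    return first + second

# Tiny check against the brute-force pairwise sum for two features.
w = [0.5, -1.0]; V = [[1.0, 2.0], [3.0, 0.5]]; x = [2.0, 1.0]
brute = sum(wi * xi for wi, xi in zip(w, x))
brute += sum(V[0][f] * V[1][f] for f in range(2)) * x[0] * x[1]
assert abs(fm_intermediate(w, V, x) - brute) < 1e-9
```

The identity is what makes the cross term tractable under secret sharing: it replaces the quadratic number of pairwise products with sums and squares, which map onto secret sharing addition and multiplication.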
Optionally, the secret sharing module includes:
an acquisition submodule, configured to acquire first-party model parameters and first-party training label data, and use the first share of the first-party model parameters as the first shared parameter;
a first sending submodule, configured to send the second share of the first-party model parameters to the second device, for the second device to determine a third shared parameter;
a first receiving submodule, configured to receive a second shared parameter sent by the second device, wherein the second shared parameter is the second share of the second-party model parameters acquired by the second device, and the first share of the second-party model parameters is the fourth shared parameter of the second device;
a second sending submodule, configured to use the first share of the first-party training label data as the first shared training data, and send the second share of the first-party training label data to the second device, for the second device to determine third shared training data;
a second receiving submodule, configured to receive second shared training data sent by the second device, wherein the second shared training data is the second share of the second-party training data acquired by the second device, and the first share of the second-party training data is the fourth shared training data of the second device.
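The keep-one-share, send-one-share exchange above corresponds to plain additive secret sharing. A minimal sketch; the modulus and variable names are illustrative assumptions:

```python
import random

FIELD = 2**31 - 1  # assumed public modulus both parties agree on

def split(value: int):
    """Split a private value into two additive shares: keep the first,
    send the second to the other party. Either share alone is a uniformly
    random value and reveals nothing about the secret."""
    first = random.randrange(FIELD)
    second = (value - first) % FIELD
    return first, second

# The first party splits its model parameter; only the sum reconstructs it.
w_first_party = 123456
keep, send = split(w_first_party)
assert (keep + send) % FIELD == w_first_party
```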
Optionally, the determining module includes:
an updating submodule, configured to update the secret sharing model parameters based on the secret sharing regression error, obtaining secret sharing regression model update parameters;
a decryption submodule, configured to determine, based on the secret sharing regression model update parameters and through decryption interaction with the second device, the first target regression model parameters, for the second device to determine the second target regression model parameters.
Optionally, the updating submodule includes:
a seventh calculation unit, configured to calculate first gradient information of the secret sharing regression error with respect to the first secret sharing model parameter, and calculate second gradient information of the secret sharing regression error with respect to the first shared second-type model parameter;
a first updating unit, configured to update the first secret sharing model parameter based on the first gradient information and a preset first learning parameter until a preset federated learning end condition is met, obtaining the first shared regression model parameter;
a second updating unit, configured to update the second secret sharing model parameter based on the second gradient information and a preset second learning parameter until the preset federated learning end condition is met, obtaining the second shared regression model parameter.
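The gradient step itself is linear, so each party can apply it to its own shares and the reconstructed parameter still follows the plain-text SGD rule. A toy sketch under assumed floating-point shares (a real protocol would use fixed-point encoding over a finite field):

```python
def update_share(param_share: float, grad_share: float, lr: float) -> float:
    """Local SGD step on one party's share. Because w <- w - lr * grad is
    linear, summing the updated shares reconstructs the updated parameter."""
    return param_share - lr * grad_share

w_shares = [0.7, 0.3]   # reconstructs to w = 1.0
g_shares = [1.5, 0.5]   # reconstructs to grad = 2.0
new_shares = [update_share(w, g, lr=0.1) for w, g in zip(w_shares, g_shares)]
assert abs(sum(new_shares) - (1.0 - 0.1 * 2.0)) < 1e-12
```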
Optionally, the decryption submodule includes:
an assisting decryption unit, configured to receive the second share of the first-party model update parameters determined by the second device through vertical federated learning modeling, and send the second share of the second-party model update parameters to the second device, for the second device to determine the second target regression model parameters based on the first share and the second share of the second-party model update parameters determined through vertical federated learning modeling;
an aggregation unit, configured to aggregate the first share and the second share of the first-party model update parameters, obtaining the first target regression model parameters.
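The aggregation unit's "decryption" is just share reconstruction: each party ends up holding both shares of its own parameters and sums them. A one-line sketch with an assumed modulus:

```python
FIELD = 2**31 - 1  # assumed public modulus

def reconstruct(first_share: int, second_share: int) -> int:
    """Aggregating both shares of a model update parameter reveals the
    plain-text target parameter to the party that owns it, and only to it."""
    return (first_share + second_share) % FIELD

# Shares 7 and FIELD - 3 reconstruct to the secret 4.
assert reconstruct(7, FIELD - 3) == 4
```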
The specific implementation of the factorization machine regression model construction apparatus of the present application is substantially the same as the embodiments of the factorization machine regression model construction method described above, and is not repeated here.
An embodiment of the present application further provides a personalized recommendation apparatus, applied to a personalized recommendation device, the apparatus including:
a secret sharing module, configured to perform secret sharing with a second device, obtaining secret sharing to-be-recommended user data and secret sharing model parameters;
a scoring module, configured to input the secret sharing to-be-recommended user data into a preset scoring model, so as to score, based on the secret sharing model parameters, the to-be-recommended items corresponding to the secret sharing to-be-recommended user data, obtaining a first secret sharing scoring result;
a calculation module, configured to perform federated interaction with the second device based on the first secret sharing scoring result, so as to calculate a target score jointly with the second secret sharing scoring result determined by the second device;
a generation module, configured to generate, based on the target score, a target recommendation list for the to-be-recommended items.
Optionally, the calculation module includes:
a receiving unit, configured to receive the second shared first-party score and the second shared second-party score sent by the second device;
a first calculation unit, configured to calculate a first-party score based on the first shared first-party score and the second shared first-party score;
a second calculation unit, configured to calculate a second-party score based on the first shared second-party score and the second shared second-party score;
an aggregation unit, configured to aggregate the first-party score and the second-party score, obtaining the target score.
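The score aggregation and list generation performed by these units can be sketched as follows; item names and scores are invented for illustration, and plain integers stand in for reconstructed party scores:

```python
def target_scores(party_a: dict, party_b: dict) -> dict:
    """Aggregate the two parties' reconstructed scores per candidate item."""
    return {item: party_a[item] + party_b[item] for item in party_a}

def recommend(scores: dict, top_k: int = 2) -> list:
    """Generate the target recommendation list: items ranked by
    aggregated score, highest first, truncated to top_k."""
    ranked = sorted(scores, key=lambda item: scores[item], reverse=True)
    return ranked[:top_k]

a = {"item1": 2, "item2": 1, "item3": 5}
b = {"item1": 4, "item2": 1, "item3": -4}
merged = target_scores(a, b)  # item1: 6, item2: 2, item3: 1
assert recommend(merged, top_k=2) == ["item1", "item2"]
```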
The specific implementation of the personalized recommendation apparatus of the present application is substantially the same as the embodiments of the personalized recommendation method described above, and is not repeated here.
An embodiment of the present application provides a readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of any of the factorization machine regression model construction methods described above.
The specific implementation of the readable storage medium of the present application is substantially the same as the embodiments of the factorization machine regression model construction method described above, and is not repeated here.
An embodiment of the present application provides a readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of any of the personalized recommendation methods described above.
The specific implementation of the readable storage medium of the present application is substantially the same as the embodiments of the personalized recommendation method described above, and is not repeated here.
The above are merely preferred embodiments of the present application and are not intended to limit its patent scope. Any equivalent structural or process transformation made using the content of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present application.
Claims (13)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010893497.XA CN112000988B (en) | 2020-08-28 | 2020-08-28 | Factor decomposition machine regression model construction method, device and readable storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN112000988A CN112000988A (en) | 2020-11-27 |
| CN112000988B true CN112000988B (en) | 2024-11-05 |
Family
ID=73465476
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010893497.XA Active CN112000988B (en) | 2020-08-28 | 2020-08-28 | Factor decomposition machine regression model construction method, device and readable storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN112000988B (en) |
Families Citing this family (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113240184B (en) * | 2021-05-21 | 2022-06-24 | 浙江大学 | Building space unit cold load prediction method and system based on federal learning |
| CN113505894B (en) * | 2021-06-02 | 2023-12-15 | 北京航空航天大学 | Longitudinal federal learning linear regression and logistic regression model training method and device |
| CN113536667B (en) * | 2021-06-22 | 2024-03-01 | 同盾科技有限公司 | Federated model training methods, devices, readable storage media and equipment |
| CN114358323B (en) * | 2021-12-29 | 2026-01-06 | 深圳前海新心数字科技有限公司 | Efficient Pearson Coefficient Calculation Method Based on Third Party in Federated Learning Environment |
Family Cites Families (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP4220464A1 (en) * | 2017-03-22 | 2023-08-02 | Visa International Service Association | Privacy-preserving machine learning |
| KR102337168B1 (en) * | 2019-01-11 | 2021-12-08 | 어드밴스드 뉴 테크놀로지스 씨오., 엘티디. | Logistic Regression Modeling Method Using Secret Sharing |
| CN110288094B (en) * | 2019-06-10 | 2020-12-18 | 深圳前海微众银行股份有限公司 | Model parameter training method and device based on federated learning |
| CN111079939B (en) * | 2019-11-28 | 2021-04-20 | 支付宝(杭州)信息技术有限公司 | Method and device for feature screening of machine learning model based on data privacy protection |
| CN111241567B (en) * | 2020-01-16 | 2023-09-01 | 深圳前海微众银行股份有限公司 | Data sharing method, system and storage medium in vertical federated learning |
| CN111259446B (en) * | 2020-01-16 | 2023-08-22 | 深圳前海微众银行股份有限公司 | Parameter processing method, device and storage medium based on federated transfer learning |
2020-08-28: Application CN202010893497.XA filed in CN; patent CN112000988B (en), status Active
Non-Patent Citations (1)
| Title |
|---|
| Shared MF: A privacy-preserving recommendation system; Senci Ying; arXiv; 2020-08-18; Abstract, p. 2 left col. para. 1 to p. 4 left col. para. 1, Figs. 1-3 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN112000988A (en) | 2020-11-27 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN112149171B (en) | Method, device, equipment and storage medium for training federal neural network model | |
| CN112000988B (en) | Factor decomposition machine regression model construction method, device and readable storage medium | |
| CN112733967B (en) | Model training method, device, equipment and storage medium for federal learning | |
| CN112000987A (en) | Factorization machine classification model construction method and device and readable storage medium | |
| CN111340247B (en) | Longitudinal federal learning system optimization method, device and readable storage medium | |
| CN112016698B (en) | Factorization machine model construction method, factorization machine model construction equipment and readable storage medium | |
| WO2021120676A1 (en) | Model training method for federated learning network, and related device | |
| WO2021092980A1 (en) | Longitudinal federated learning optimization method, apparatus and device, and storage medium | |
| Cai et al. | Leveraging crowdsensed data streams to discover and sell knowledge: A secure and efficient realization | |
| US20200019865A1 (en) | System and method for processing data and managing information | |
| CN111985573B (en) | Method, device and readable storage medium for building factor decomposition machine classification model | |
| WO2021092977A1 (en) | Vertical federated learning optimization method, appartus, device and storage medium | |
| JP2020525814A (en) | Logistic regression modeling method using secret sharing | |
| CN111428887A (en) | Model training control method, device and system based on multiple computing nodes | |
| CN111291273A (en) | Recommendation system optimization method, device, equipment and readable storage medium | |
| CN109598385A (en) | Anti money washing combination learning method, apparatus, equipment, system and storage medium | |
| WO2023124219A1 (en) | Joint learning model iterative update method, apparatus, system, and storage medium | |
| CN114492850A (en) | Model training method, device, medium, and program product based on federal learning | |
| CN111368314B (en) | Modeling and prediction method, device, equipment and storage medium based on cross characteristics | |
| CN114330673A (en) | Method and device for performing multi-party joint training on business prediction model | |
| CN114742239A (en) | Method and device for training financial insurance claims risk model based on federated learning | |
| CN112633356B (en) | Recommendation model training method, recommendation device, recommendation equipment and storage medium | |
| CN114648666A (en) | Classification model training and data classification method and device and electronic equipment | |
| CN114358323B (en) | Efficient Pearson Coefficient Calculation Method Based on Third Party in Federated Learning Environment | |
| CN117251805A (en) | Federated gradient boosting decision tree model update system based on breadth-first algorithm |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |