
CN113837399B - Training method, device, system, storage medium and equipment for federal learning model - Google Patents

Training method, device, system, storage medium and equipment for federal learning model

Info

Publication number
CN113837399B
Authority
CN
China
Prior art keywords
local
model
parameters
global
updated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111248479.7A
Other languages
Chinese (zh)
Other versions
CN113837399A (en)
Inventor
马鑫
包仁义
徐松
畅绍政
雷江涛
刘兵
张凯
蒋锦鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yidu Cloud Beijing Technology Co Ltd
Original Assignee
Yidu Cloud Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yidu Cloud Beijing Technology Co Ltd filed Critical Yidu Cloud Beijing Technology Co Ltd
Priority to CN202111248479.7A
Publication of CN113837399A
Application granted
Publication of CN113837399B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Feedback Control In General (AREA)
  • Machine Translation (AREA)

Abstract

The application discloses a training method, device, system, storage medium and equipment for a federated learning model. The method is applied to a user side and includes the following steps: performing model training with local data to obtain a local model and a private model (a model trained only on local data that does not participate in aggregation); determining a local weight parameter corresponding to the local model according to the degree of difference between the local model and the private model; sending the local model parameters and the local weight parameter corresponding to the local model to a server, so that the server updates the global model parameters according to the local model parameters and the local weight parameters; receiving the updated global model parameters from the server; and updating the local model parameters and the local weight parameter according to the updated global model parameters to obtain updated local model parameters and an updated local weight parameter. By this method, a global model with better model performance can be obtained.

Description

Training method, device, system, storage medium and equipment for federated learning model
Technical Field
The application relates to the technical field of federated learning, and in particular to a training method, device, system, storage medium and equipment for a federated learning model.
Background
Federated machine learning is a machine learning framework that can effectively solve the problem of data islands: it enables participants to build models jointly without sharing data, thereby technically breaking down data islands and enabling AI collaboration. The federated averaging algorithm (FedAvg) performs well when dealing with independent and identically distributed (IID) data and is comparable to a centralized algorithm.
However, in practical applications the global model is formed mainly by aggregating the local models of the individual clients, whose production data differ in standards, features, distributions, and so on. When the client data are not independent and identically distributed, i.e. Non-IID data, a client drift phenomenon occurs and the performance of the federated averaging algorithm degrades.
Disclosure of Invention
To solve the above problems in the background art, the embodiments of the present application provide a training method, apparatus, storage medium and device for a federated learning model.
According to a first aspect of the present application, there is provided a training method for a federated learning model, the method comprising: performing model training with local data to obtain a local model and a private model; determining a local weight parameter corresponding to the local model according to the degree of difference between the local model and the private model; sending the local model parameters and the local weight parameter corresponding to the local model to a server, so that the server updates the global model parameters according to the local model parameters and the local weight parameters; receiving the updated global model parameters from the server; and updating the local model parameters and the local weight parameter according to the updated global model parameters to obtain updated local model parameters and an updated local weight parameter.
According to an embodiment of the present application, determining the local weight parameter corresponding to the local model according to the degree of difference between the local model and the private model includes: determining a local gradient vector according to the local model; determining a private gradient vector according to the private model; determining a difference angle according to the local gradient vector and the private gradient vector; and determining the local weight parameter corresponding to the local model based on the difference angle.
According to an embodiment of the present application, determining the local weight parameter corresponding to the local model based on the difference angle includes: determining the sample proportion of the local data relative to the global data; determining an adjustment factor according to the difference angle, wherein the value of the adjustment factor is inversely proportional to the difference angle; and combining the sample proportion and the adjustment factor to obtain the local weight parameter.
According to an embodiment of the present application, performing model training with local data to obtain a local model and a private model includes: receiving an initialization model from the server; setting first model parameters of the initialization model to obtain an initialized local model; setting second model parameters of the initialization model to obtain an initialized private model; training the initialized local model with the local data to obtain the local model; and training the initialized private model with the local data to obtain the private model.
According to a second aspect of the present application, there is provided a training method for a federated learning model, the method comprising: receiving local model parameters and local weight parameters from a plurality of user sides; updating the global model parameters according to the plurality of local model parameters and local weight parameters; and sending the updated global model parameters to each user side, so that each user side updates its local model parameters and local weight parameter according to the updated global model parameters to obtain updated local model parameters and an updated local weight parameter.
According to an embodiment of the present application, updating the global model parameters according to the plurality of local model parameters and local weight parameters includes: normalizing the local weight parameters to obtain global weight parameters; and aggregating the local model parameters according to the global weight parameters to update the parameters of the global model and obtain the updated global model parameters.
According to a third aspect of the present application, there is provided a training apparatus for a federated learning model, the apparatus comprising: a model training module, configured to perform model training with local data to obtain a local model and a private model; a parameter determining module, configured to determine a local weight parameter corresponding to the local model according to the degree of difference between the local model and the private model; a first sending module, configured to send the local model parameters and the local weight parameter corresponding to the local model to a server, so that the server updates the global model parameters according to the local model parameters and the local weight parameters; a first receiving module, configured to receive the updated global model parameters from the server; and a first updating module, configured to update the local model parameters and the local weight parameter according to the updated global model parameters to obtain updated local model parameters and an updated local weight parameter.
According to an embodiment of the present application, the parameter determining module is configured to: determine a local gradient vector according to the local model; determine a private gradient vector according to the private model; determine a difference angle according to the local gradient vector and the private gradient vector; and determine the local weight parameter corresponding to the local model based on the difference angle.
According to an embodiment of the present application, the parameter determining module is further configured to: determine the sample proportion of the local data relative to the global data; determine an adjustment factor according to the difference angle, wherein the value of the adjustment factor is inversely proportional to the difference angle; and combine the sample proportion and the adjustment factor to obtain the local weight parameter.
According to an embodiment of the present application, the model training module includes: a receiving submodule, configured to receive an initialization model from the server; a setting submodule, configured to set first model parameters of the initialization model to obtain an initialized local model, and further configured to set second model parameters of the initialization model to obtain an initialized private model; and a training submodule, configured to train the initialized local model with the local data to obtain the local model, and further configured to train the initialized private model with the local data to obtain the private model.
According to a fourth aspect of the present application, there is provided a training apparatus for a federated learning model, the apparatus comprising: a second receiving module, configured to receive local model parameters and local weight parameters from a plurality of user sides; a second updating module, configured to update the global model parameters according to the plurality of local model parameters and local weight parameters; and a second sending module, configured to send the updated global model parameters to each user side, so that each user side updates its local model parameters and local weight parameter according to the updated global model parameters to obtain updated local model parameters and an updated local weight parameter.
According to an embodiment of the present application, the second updating module includes: a normalization submodule, configured to normalize the local weight parameters to obtain global weight parameters; and an aggregation submodule, configured to aggregate the local model parameters according to the global weight parameters, so as to update the parameters of the global model and obtain the updated global model parameters.
According to a fifth aspect of the present application, there is provided a training system for a federated learning model, the system comprising a server and a plurality of user sides. The server comprises: a second receiving module, configured to receive local model parameters and local weight parameters from the plurality of user sides; a second updating module, configured to update the global model parameters according to the plurality of local model parameters and local weight parameters; and a second sending module, configured to send the updated global model parameters to each user side, so that each user side updates its local model parameters and local weight parameter according to the updated global model parameters to obtain updated local model parameters and an updated local weight parameter. Each user side comprises: a model training module, configured to perform model training with local data to obtain a local model and a private model; a parameter determining module, configured to determine a local weight parameter corresponding to the local model according to the degree of difference between the local model and the private model; a first sending module, configured to send the local model parameters and the local weight parameter corresponding to the local model to the server, so that the server updates the global model parameters according to the local model parameters and the local weight parameters; a first receiving module, configured to receive the updated global model parameters from the server; and a first updating module, configured to update the local model parameters and the local weight parameter according to the updated global model parameters to obtain updated local model parameters and an updated local weight parameter.
According to a sixth aspect of the present application, there is provided a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements any of the methods described above when executing the program.
According to a seventh aspect of the present application, there is provided a storage medium containing computer-executable instructions which, when executed by a computer processor, perform any of the methods described above.
According to the training method, device, system, storage medium and equipment for the federated learning model, a local model and a private model are obtained by training with local data, and a local weight parameter is determined according to the degree of difference between the local model and the private model, so that the global model can be updated according to each local model parameter and its corresponding local weight parameter to obtain updated global model parameters. The updated global model parameters are then used to update the local model parameters; the degree of difference between the private model and the updated local model is re-determined to obtain an updated local weight parameter; and the updated local model parameters and updated local weight parameters are used to update the global model again. In this way, each update of the global model corresponds to a different set of local weight parameters. Through this adjustment of dynamic weight coefficients, the global model can better learn the deflection information of the local models, converges better, and achieves better model performance.
It should be understood that the teachings of the present application are not required to achieve all of the above-described benefits, but rather that certain technical solutions may achieve certain technical effects, and that other embodiments of the present application may also achieve benefits not mentioned above.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present application will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present application are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings. In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
FIG. 1 shows a first implementation flow diagram of a training method for a federated learning model according to an embodiment of the present application;
FIG. 2 shows a second implementation flow diagram of a training method for a federated learning model according to an embodiment of the present application;
FIG. 3 shows a third implementation flow diagram of a training method for a federated learning model according to an embodiment of the present application;
FIG. 4 shows a schematic diagram of an implementation scenario of a training method for a federated learning model according to an embodiment of the present application;
FIG. 5 shows a schematic diagram of the implementation modules of a training apparatus for a federated learning model according to an embodiment of the present application;
FIG. 6 shows a schematic diagram of the implementation modules of a training apparatus for a federated learning model according to another embodiment of the present application;
FIG. 7 shows a schematic diagram of the implementation devices of a training system for a federated learning model according to an embodiment of the present application;
FIG. 8 shows a schematic block diagram of an electronic device according to an embodiment of the application.
Detailed Description
The principles and spirit of the present application will be described below with reference to several exemplary embodiments. It should be understood that these embodiments are presented merely to enable one skilled in the art to better understand and practice the present application and are not intended to limit the scope of the present application in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The technical scheme of the present application is further elaborated below with reference to the drawings and specific embodiments.
Fig. 1 shows a first implementation flow diagram of a training method for a federated learning model according to an embodiment of the present application.
Referring to fig. 1, according to the first aspect of the present application, there is provided a training method for a federated learning model, the method being applied to a user side and comprising: operation 101, performing model training with local data to obtain a local model and a private model; operation 102, determining a local weight parameter corresponding to the local model according to the degree of difference between the local model and the private model; operation 103, sending the local model parameters and the local weight parameter corresponding to the local model to the server, so that the server updates the global model parameters according to the local model parameters and the local weight parameters; operation 104, receiving the updated global model parameters from the server; and operation 105, updating the local model parameters and the local weight parameter according to the updated global model parameters to obtain updated local model parameters and an updated local weight parameter.
According to this training method for a federated learning model, the user side trains a local model and a private model with local data, and, with the private model as a reference, determines the local weight parameter corresponding to the local model according to the degree of difference between the local model and the private model. The global model can therefore be updated according to each local model parameter and its corresponding local weight parameter to obtain updated global model parameters. The updated global model parameters are then used to update the local model parameters; the degree of difference between the private model and the updated local model is re-determined to obtain an updated local weight parameter; and the updated local model parameters and updated local weight parameters are used to update the global model again, and so on. In this way, each update of the global model corresponds to a different set of local weight parameters. Through this adjustment of dynamic weight coefficients, the global model can better learn the deflection information of the local models, converges better, and therefore achieves better accuracy and better model performance.
In operation 101 of the method, the user side refers to a data participant that wants to train the global model; it can be understood that in federated learning there are a plurality of user sides, and the method can be applied to each of them. The local data are the training data provided by the user side for obtaining the global model, and may be data of a private nature; for example, when the user side is a hospital, the local data may be the hospital's case data. Further, the global data denotes the collection of all local data. It should be added that in the first round of training, the initialization model used to obtain the local model and the private model may, as required, use identical or different initialization parameters. The private model refers to a model trained only with local data; it does not participate in the aggregation and updating of the global model. The local model refers to a model that participates in the aggregation of the global model: it is trained with local data, and its parameters are updated according to the global model parameters from the global model.
In operation 102 of the method, the local data of different user sides typically exhibit non-independent, non-identically distributed (Non-IID) characteristics. The local model is the aggregation basis of the global model, and as the local model is updated, the difference between the local data and the data of other user sides is reflected in the difference between the local model and the private model. This difference can therefore be used to determine the local weight parameter corresponding to the local model, which is then used in the aggregation of the global model. It can be understood that, through different calculation formulas, the same degree of difference can be mapped to local weight parameters with different values, so that the global model exhibits different characteristics. For example, when the difference between the local model and the private model is larger, the difference between the local data and the data of other user sides is larger. In this case, if the calculated local weight parameter is smaller, local data with larger differences have less influence on the global model, so that the global model tends to represent the main tendency of the training data. Conversely, if the calculated local weight parameter is larger, local data with larger differences have more influence on the global model, so that the global model tends to represent the divergent tendencies of the training data. For example, in the training of a federated learning model, the user sides may be hospitals in different areas or different hospitals in the same area, and the local data are the medical data of those hospitals. Suppose the patient characteristics and detection standards of the local data of several hospitals differ considerably from those of other areas. If a detection model suited to typical patients is desired, the condition can be set so that the larger the difference between the local model and the private model, the smaller the value of the local weight parameter, reducing the influence of the medical data of the hospitals with larger differences on the global model. Conversely, the condition may also be set so that the smaller the difference between the local model and the private model, the smaller the value of the local weight parameter.
In operation 103 of the method, each user side sends the local model parameters and the local weight parameter corresponding to its local model to the server. The server refers to the side that performs the aggregation and update of the global model; it may be a server independent of all user sides, or a specific device with data-processing capability. It should be added that, depending on the global model updating method, the local model parameters may be one or more of: all model parameters, a part of the model parameters, or parameters related to the local model, such as gradient parameters or variable parameters of the local model. The server aggregates the local model parameters and local weight parameters from all user sides to obtain the global model parameters corresponding to the global model, and updates the global model parameters. It can be understood that in the first round of updating there may be no difference between the local model and the private model; in this case the method may use preset weight coefficients for the global model parameter update, or set other conditions to obtain the corresponding local weight parameters, for example determining the weight coefficients by the proportion of the local data to the total training data. As the aggregation and updating of the global model proceed, a larger difference between the local data and the other training data is reflected in a larger difference between the local model and the private model.
In operation 104 of the method, the updated global model parameters are received from the server. Similarly, depending on the strategy of the federated learning algorithm, the updated global model parameters sent by the server to the user side may be one or more of: all global model parameters, a part of the global model parameters, or parameters related to the global model parameters. This will not be repeated below.
In operation 105 of the method, the local model parameters are updated with the updated global model parameters to obtain an updated local model. Then, as in operation 102, the updated local weight parameter is determined according to the difference between the updated local model and the private model, and the updated local model parameters and local weight parameter can be sent to the server so that the server aggregates and updates the global model, and so on, until a global model meeting the set requirements is obtained. It should be added that, depending on the federated learning algorithm, the method may first update the local model parameters with the updated global model parameters to obtain the updated local model, then train the private model and the updated local model again with the local data, and then compare the difference between the trained local model and the trained private model to determine the updated local weight parameter, as in operation 102. It can be understood that with each round of training the private model is not updated according to the global model, whereas the local model is; that is, with each round of training the model parameters of the local model and the private model will differ, and the value of the updated local weight parameter will differ from that of the local weight parameter before updating.
In summary, the method uses the private model as a reference and uses the degree of difference between the private model and the local model, which is updated along with the federated learning algorithm, to dynamically adjust the local weight parameters, so that during aggregation the global model can better learn the deflection information of all local models, converges better, and achieves better model performance.
Fig. 2 shows a second implementation flow diagram of a training method for a federated learning model according to an embodiment of the present application.
Referring to fig. 2, according to an embodiment of the present application, operation 102, determining the local weight parameter corresponding to the local model according to the degree of difference between the local model and the private model, includes: operation 1021, determining a local gradient vector according to the local model; operation 1022, determining a private gradient vector according to the private model; operation 1023, determining a difference angle according to the local gradient vector and the private gradient vector; and operation 1024, determining the local weight parameter corresponding to the local model based on the difference angle.
It will be appreciated that the degree of difference between the local model and the private model may be evaluated by various criteria, such as the difference between corresponding layers of the two models or the difference between their output results. The following provides an embodiment in which the angle between the gradient vectors of the local model and the private model is used as the degree of difference for determining the local weight parameter.
In operation 1021 of the method, the gradient record corresponding to the local model is loaded from the training of the local model, and the local gradient vector corresponding to the local model is determined from that record. Similarly, in operation 1022 of the method, the gradient record of the private model is loaded from the training of the private model, and the private gradient vector corresponding to the private model is determined from it. It should be understood that the numbering of operations 1021 and 1022 merely distinguishes the two operations and does not imply an execution order: they may be performed simultaneously, or operation 1022 may be performed before operation 1021.
In operation 1023 of the method, the angle formed by the local gradient vector and the private gradient vector, and its specific value, can be obtained from the two vectors. This angle characterizes the degree of difference between the local model and the private model, i.e. it is the difference angle. In operation 1024, the corresponding local weight parameter is obtained by converting the difference angle with a set function. As described above, the difference angle may be made directly proportional or inversely proportional to the value of the weight coefficient, depending on the chosen formula.
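As a concrete illustration of operation 1023, the sketch below computes the angle between two gradient vectors with NumPy. This is a minimal sketch only: the function name and the small epsilon guard are illustrative assumptions, not part of the patented implementation.

```python
import numpy as np

def difference_angle(local_grad: np.ndarray, private_grad: np.ndarray) -> float:
    """Angle (in radians) between the local and private gradient vectors.

    A small angle means the local data pull both models in similar
    directions; a large angle indicates client drift.
    """
    cos_sim = np.dot(local_grad, private_grad) / (
        np.linalg.norm(local_grad) * np.linalg.norm(private_grad) + 1e-12
    )
    # Clamp to the valid arccos domain to guard against rounding error.
    return float(np.arccos(np.clip(cos_sim, -1.0, 1.0)))

# Example: identical directions give 0, orthogonal directions give pi/2.
print(difference_angle(np.array([1.0, 0.0]), np.array([2.0, 0.0])))  # ~0.0
print(difference_angle(np.array([1.0, 0.0]), np.array([0.0, 3.0])))  # ~1.5708
```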
According to an embodiment of the present application, operation 1024, determining the local weight parameter corresponding to the local model based on the difference angle, includes: first, determining the sample proportion of the local data relative to the global data; then, determining an adjustment factor according to the difference angle, wherein the value of the adjustment factor is inversely proportional to the difference angle; and finally, combining the sample proportion and the adjustment factor to obtain the local weight parameter.
It will be appreciated from the above that, since in the first round of updating the parameters of the local model and the private model are identical, the method may introduce the global data, i.e. the collection of the local data of all user sides, as a reference. The sample proportion is determined as the ratio of the local data to the global data and serves as the baseline of the weight coefficient; the adjustment factor derived from the difference angle then scales the sample proportion to obtain the weight coefficient. Determined in this way, the weight coefficient also carries the information of the sample proportion, which prevents user sides with very few samples from having an outsized influence on the global model, so that the obtained global model reflects the tendency of the majority of the global data.
In one embodiment, the sample ratio may be determined using the following formula:
$$X_i = \frac{S_i}{\sum_{j=1}^{N} S_j}$$

In the formula, $i$ denotes the $i$-th user side; $N$ denotes the total number of user sides; $S_i$ denotes the sample size of the local data of the $i$-th user side; and $X_i$ denotes the sample proportion. For example, if one user side holds 200 of 1,000 total training samples, its sample proportion is 0.2. In the following formulas, the superscript $k$ denotes the $k$-th iteration; it is understood that $k$ is a positive integer greater than or equal to 1.
In one embodiment, the differential angle may be determined using the following formula:
$$\varphi_i^k = \arccos\left(\frac{h_i^k \cdot g_i^k}{\left\|h_i^k\right\|\,\left\|g_i^k\right\|}\right)$$

In the formula, $\varphi_i^k$ denotes the difference angle of the $i$-th user side at the $k$-th iteration; $h_i^k$ denotes the private gradient vector; and $g_i^k$ denotes the local gradient vector.
In one embodiment, the local weight parameter may be determined by scaling the sample proportion with the adjustment factor derived from the difference angle, which can be written in the general form:

$$w_i^k = X_i \cdot f\!\left(\varphi_i^k;\ \lambda,\ \rho\right)$$

In the formula, $w_i^k$ denotes the local weight parameter corresponding to the $k$-th iteration of the $i$-th user side; $f(\cdot)$ denotes the adjustment factor, whose value decreases as the difference angle $\varphi_i^k$ increases; and $\lambda$ and $\rho$ are preset hyperparameters used to adjust the magnitude of the local weight parameter, which can be set manually as required.
In addition, the method may further adjust the local weight parameters at the server to obtain normalized local weight parameters, so as to meet the server's requirement for normalization of the local weight parameters. This can be formulated as follows:
$$\tilde{w}_i^k = \frac{w_i^k}{\sum_{j=1}^{N} w_j^k}$$

In the formula, $\tilde{w}_i^k$ denotes the global weight parameter corresponding to the $k$-th iteration of the $i$-th user side, obtained by normalizing the local weight parameters so that they sum to one.
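The three quantities above (sample proportion, angle-based adjustment factor, and normalized global weight) can be sketched as follows. Since the exact adjustment function is not recoverable here, the exponential form `lam * exp(-rho * angle)` is only one possible choice satisfying the stated requirement that the factor shrink as the angle grows, with `lam` and `rho` standing in for the preset hyperparameters:

```python
import numpy as np

def sample_proportion(local_sizes: list[int]) -> np.ndarray:
    """X_i = S_i / sum_j S_j for every user side i."""
    sizes = np.asarray(local_sizes, dtype=float)
    return sizes / sizes.sum()

def local_weight(x_i: float, angle: float, lam: float = 1.0, rho: float = 1.0) -> float:
    """Local weight: sample proportion scaled by an adjustment factor that
    shrinks as the difference angle grows (illustrative exponential form)."""
    return float(x_i * lam * np.exp(-rho * angle))

def normalize_weights(weights: list[float]) -> np.ndarray:
    """Server-side normalization yielding the global weight parameters."""
    w = np.asarray(weights, dtype=float)
    return w / w.sum()

# Example with three user sides: the second one drifts the most (largest
# angle) and therefore ends up with the smallest share of the aggregation.
props = sample_proportion([200, 300, 500])
angles = [0.1, 1.2, 0.4]
raw = [local_weight(x, a) for x, a in zip(props, angles)]
print(normalize_weights(raw))
```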
Based on this, the server can obtain the local weight parameter and the global weight parameter corresponding to each user side, and, according to the parameter aggregation formula set for the federated learning model, aggregate the local model parameters using the local weight parameters or the global weight parameters to obtain the global model parameters.
According to an embodiment of the present application, operation 101, performing model training with local data to obtain a local model and a private model, includes: first, receiving an initialization model from the server; then, setting first model parameters of the initialization model to obtain an initialized local model; next, setting second model parameters of the initialization model to obtain an initialized private model; then, training the initialized local model with the local data to obtain the local model; and finally, training the initialized private model with the local data to obtain the private model.
In this method, the initialization model used to train the local model and the private model comes from the server, which sends it to each user side. The initialization models sent to the different user sides may, as required, be identical or different. The user side then sets the initialization model parameters for model training to obtain an initialized local model and an initialized private model, trains both with the local data, and, after completing the set number of iterations, obtains the local model and the private model for the first round of aggregation. By analogy, after the local model has been updated with the global model parameters, the private model and the updated local model can be trained again with the local data, and, after completing the set number of iterations, the local model and the private model for the second round of aggregation are obtained.
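A minimal sketch of this initialization and local-training step, using plain gradient descent on a synthetic least-squares task; the linear model and all helper names are assumptions for illustration only:

```python
import numpy as np

def grad_mse(theta: np.ndarray, X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Gradient of the mean-squared error for a linear model."""
    return 2.0 * X.T @ (X @ theta - y) / len(y)

def train(theta: np.ndarray, X: np.ndarray, y: np.ndarray,
          lr: float = 0.1, epochs: int = 20) -> tuple[np.ndarray, np.ndarray]:
    """Run a few gradient steps and return (trained parameters, last gradient)."""
    g = np.zeros_like(theta)
    for _ in range(epochs):
        g = grad_mse(theta, X, y)
        theta = theta - lr * g
    return theta, g

rng = np.random.default_rng(0)
X_local = rng.normal(size=(64, 3))            # local data of one user side
y_local = X_local @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=64)

init_params = np.zeros(3)                     # initialization model from the server
local_params = init_params.copy()             # initialized local model (first parameter setting)
private_params = init_params.copy()           # initialized private model (second parameter setting)

local_params, local_grad = train(local_params, X_local, y_local)
private_params, private_grad = train(private_params, X_local, y_local)
# After the first round both are identical; they diverge once the local model
# starts being overwritten with the aggregated global parameters.
```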
Specifically, the aggregate update to the global model may be formulated as follows:
$$L(\theta) = \sum_{i=1}^{N} \tilde{w}_i^k\, L_i(\theta_k), \qquad \theta_{k+1} = \sum_{i=1}^{N} \tilde{w}_i^k\, \theta_i^k$$

In the formula, $L(\theta)$ denotes the loss function corresponding to the global model; $L_i(\theta_k)$ denotes the loss function corresponding to user side $i$; $\theta_i^k$ denotes the local model parameters uploaded by user side $i$ at the $k$-th iteration; $\theta_k$ denotes the local model parameters of the $k$-th iteration; and $\theta_{k+1}$ denotes the aggregated global model parameters that serve as the local model parameters for the $(k+1)$-th iteration.
Fig. 3 shows a third implementation flow diagram of a training method for a federated learning model according to an embodiment of the present application.
Referring to fig. 3, according to the second aspect of the present application, there is provided a training method for a federated learning model, the method being applied to a server and comprising: operation 301, receiving local model parameters and local weight parameters from a plurality of user sides; operation 302, updating the global model parameters according to the plurality of local model parameters and local weight parameters; and operation 303, sending the updated global model parameters to each user side, so that each user side updates its local model parameters and local weight parameter according to the updated global model parameters to obtain updated local model parameters and an updated local weight parameter.
According to this training method for a federated learning model, each user side trains a local model and a private model with its local data and determines the local weight parameter corresponding to the local model according to the degree of difference between the local model and the private model, so that the global model parameters can be updated according to each local model parameter and its corresponding local weight parameter to obtain updated global model parameters. The updated global model parameters are then used to update the local model parameters; the degree of difference between the private model and the updated local model is re-determined to obtain an updated local weight parameter; and the updated local model parameters and updated local weight parameters are used to update the global model again. In this way, each update of the global model corresponds to a different set of local weight parameters, so that through the adjustment of dynamic weight coefficients the global model can better learn the deflection information of the local models, converges better, and achieves better model performance.
The server refers to a server, or another specific electronic device with data-processing capability, used for training the global model; it receives the local model parameters and local weight parameters from each user side so as to aggregate the parameters of all user sides and update the parameters of the global model.
Before operation 301 of the method, the server sends the corresponding initialization model to the user sides, so that each user side trains the initialization model with its local data to obtain the corresponding local model and private model. After obtaining them, each user side obtains the local weight parameter corresponding to its local model by comparing the degree of difference between the local model and the private model. In operation 301, the server receives the local model parameters and local weight parameters from each user side.
In operation 302, the server implements an update to the global model parameters by aggregating the local model parameters and the local weight parameters to obtain updated global model parameters.
In operation 303, the server sends the updated global model parameters to each user side, so that each user side updates its local model parameters according to them for the next iteration. In the next iteration, after the user side trains the updated local model, the degree of difference between the private model and the local model is re-determined, so the corresponding local weight coefficient, i.e. the updated local weight coefficient, is re-determined accordingly. The server then receives the updated local model parameters and updated local weight coefficients from the user sides and can perform a further round of aggregation and updating of the global model parameters, and so on, so that the global model is updated over multiple rounds and the target global model is finally obtained.
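The server-side flow of operations 301 to 303 can be sketched as a small class; names such as FederatedServer and the in-memory message passing are illustrative assumptions standing in for the real communication layer:

```python
import numpy as np

class FederatedServer:
    """Collects (local parameters, local weight) pairs, aggregates, broadcasts."""

    def __init__(self, init_params: np.ndarray):
        self.global_params = init_params
        self._uploads: list[tuple[np.ndarray, float]] = []

    def receive(self, local_params: np.ndarray, local_weight: float) -> None:
        # Operation 301: receive local model parameters and local weight parameters.
        self._uploads.append((local_params, local_weight))

    def update_global(self) -> np.ndarray:
        # Operation 302: normalize the weights and aggregate the parameters.
        params, weights = zip(*self._uploads)
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        self.global_params = (w[:, None] * np.stack(params)).sum(axis=0)
        self._uploads.clear()
        return self.global_params

    def broadcast(self) -> np.ndarray:
        # Operation 303: send the updated global parameters back to every user side.
        return self.global_params.copy()

server = FederatedServer(np.zeros(2))
server.receive(np.array([1.0, 0.0]), 0.3)
server.receive(np.array([0.0, 1.0]), 0.7)
server.update_global()
print(server.broadcast())  # -> [0.3, 0.7]
```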
According to an embodiment of the present application, operation 302, updating the global model parameters according to the plurality of local model parameters and local weight parameters, includes: first, normalizing the local weight parameters to obtain global weight parameters; and then aggregating the local model parameters according to the global weight parameters to update the parameters of the global model and obtain the updated global model parameters.
As can be seen from the description of operation 1024, in the process of updating the global model parameters with the local weight parameters, the server needs to normalize the local weight parameters so that they are expressed on a unified scale.
To facilitate a further understanding of the above embodiments, a specific implementation scenario is provided below for illustration.
Fig. 4 shows a schematic diagram of an implementation scenario of a training method for a federated learning model according to an embodiment of the present application.
Referring to fig. 4, in this implementation scenario the method is applied to a server and a plurality of user sides. Each user side is a hospital in a different area, and the server, which is communicatively connected with every user side, is used to train a detection model on the corresponding medical data.
First, the server sends an initialization model to each user side, with initialization parameters denoted $\theta^0$.

Then, each user side sets the local model parameters and the private model parameters from the initialization model; in the initial state the two sets of parameters are identical.

Next, the user side trains the initialized private model and local model. After each round of training it loads the gradient records of the local model and the private model, computes the angle $\varphi_i^k$ between the two gradient vectors, and obtains the local weight parameter $w_i^k$. The user side then sends the local model parameters and the local weight parameter $w_i^k$ corresponding to the local model to the server.

Then, the server receives the local weight parameters $w_i^k$ of all user sides and normalizes them to obtain the global weight parameter corresponding to each local model, and aggregates the global weight parameters with the local model parameters to update the global model parameters. The server sends the updated global model parameters to each user side so that each user side updates its local model parameters.
Next, the next round of training is carried out with the updated local model and the private model, and so on; when each round of training is completed, the gradient records of the corresponding local model and private model are loaded, so that the corresponding weight parameters are dynamically re-determined in every round. In this way the global model is updated through dynamic weight parameters, and a global model with higher accuracy is finally obtained.
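Putting the pieces together, the following self-contained simulation mirrors the hospital scenario with synthetic data: several user sides with differing local data, each maintaining a private model and a local model, with angle-based dynamic weights driving the aggregation over several rounds. All names, the linear model, and the exponential weight mapping are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(42)

def grad_mse(theta, X, y):
    return 2.0 * X.T @ (X @ theta - y) / len(y)

def local_train(theta, X, y, lr=0.1, epochs=10):
    g = np.zeros_like(theta)
    for _ in range(epochs):
        g = grad_mse(theta, X, y)
        theta = theta - lr * g
    return theta, g

# Three "hospitals" with Non-IID data: different true coefficients per user side.
sizes = [80, 120, 100]
true_thetas = [np.array([1.0, -1.0]), np.array([1.2, -0.8]), np.array([3.0, 2.0])]
datasets = []
for n, t in zip(sizes, true_thetas):
    X = rng.normal(size=(n, 2))
    y = X @ t + 0.05 * rng.normal(size=n)
    datasets.append((X, y))

props = np.asarray(sizes, dtype=float)
props = props / props.sum()                        # sample proportions X_i

global_params = np.zeros(2)                        # initialization model
local_params = [global_params.copy() for _ in datasets]
private_params = [global_params.copy() for _ in datasets]

for rnd in range(20):
    uploads, weights = [], []
    for i, (X, y) in enumerate(datasets):
        local_params[i], g_loc = local_train(local_params[i], X, y)
        private_params[i], g_priv = local_train(private_params[i], X, y)
        cos = np.dot(g_loc, g_priv) / (np.linalg.norm(g_loc) * np.linalg.norm(g_priv) + 1e-12)
        angle = np.arccos(np.clip(cos, -1.0, 1.0))
        weights.append(props[i] * np.exp(-angle))  # dynamic local weight
        uploads.append(local_params[i])
    w = np.asarray(weights)
    w = w / w.sum()                                # global weight parameters
    global_params = (w[:, None] * np.stack(uploads)).sum(axis=0)
    local_params = [global_params.copy() for _ in datasets]  # private models stay local

print("final global parameters:", global_params)
```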
Fig. 5 shows a schematic diagram of the implementation modules of a training apparatus for a federated learning model according to an embodiment of the present application.
Referring to fig. 5, according to the third aspect of the present application, there is provided a training apparatus for a federated learning model, the apparatus being applied to a user side and comprising: a model training module 501, configured to perform model training with local data to obtain a local model and a private model; a parameter determining module 502, configured to determine a local weight parameter corresponding to the local model according to the degree of difference between the local model and the private model; a first sending module 503, configured to send the local model parameters and the local weight parameter corresponding to the local model to the server, so that the server updates the global model parameters according to the local model parameters and the local weight parameters; a first receiving module 504, configured to receive the updated global model parameters from the server; and a first updating module 505, configured to update the local model parameters and the local weight parameter according to the updated global model parameters to obtain updated local model parameters and an updated local weight parameter.
According to an embodiment of the present application, the parameter determining module 502 is configured to: determine a local gradient vector according to the local model; determine a private gradient vector according to the private model; determine a difference angle according to the local gradient vector and the private gradient vector; and determine the local weight parameter corresponding to the local model based on the difference angle.
According to an embodiment of the present application, the parameter determining module 502 is further configured to: determine the sample proportion of the local data relative to the global data; determine an adjustment factor according to the difference angle, wherein the value of the adjustment factor is inversely proportional to the difference angle; and combine the sample proportion and the adjustment factor to obtain the local weight parameter.
According to an embodiment of the present application, the model training module 501 includes: a receiving submodule 5011, configured to receive an initialization model from the server; a setting submodule 5012, configured to set first model parameters of the initialization model to obtain an initialized local model, and further configured to set second model parameters of the initialization model to obtain an initialized private model; and a training submodule 5013, configured to train the initialized local model with the local data to obtain the local model, and further configured to train the initialized private model with the local data to obtain the private model.
Fig. 6 shows a schematic diagram of the implementation modules of a training apparatus for a federated learning model according to another embodiment of the present application.
Referring to fig. 6, according to the fourth aspect of the present application, there is provided a training apparatus for a federated learning model, the apparatus being applied to a server and comprising: a second receiving module 601, configured to receive local model parameters and local weight parameters from a plurality of user sides; a second updating module 602, configured to update the global model parameters according to the plurality of local model parameters and local weight parameters; and a second sending module 603, configured to send the updated global model parameters to each user side, so that each user side updates its local model parameters and local weight parameter according to the updated global model parameters to obtain updated local model parameters and an updated local weight parameter.
According to an embodiment of the present application, the second updating module 602 includes: a normalization submodule 6021, configured to normalize the local weight parameters to obtain global weight parameters; and an aggregation submodule 6022, configured to aggregate the local model parameters according to the global weight parameters, so as to update the parameters of the global model and obtain the updated global model parameters.
Fig. 7 shows a schematic diagram of the implementation devices of a training system for a federated learning model according to an embodiment of the present application.
Referring to fig. 7, according to the fifth aspect of the present application, there is provided a training system for a federated learning model, the system comprising a server 600 and a plurality of user sides 500. The server 600 comprises: a second receiving module 601, configured to receive local model parameters and local weight parameters from the plurality of user sides; a second updating module 602, configured to update the global model parameters according to the plurality of local model parameters and local weight parameters; and a second sending module 603, configured to send the updated global model parameters to each user side, so that each user side updates its local model parameters and local weight parameter according to the updated global model parameters to obtain updated local model parameters and an updated local weight parameter. Each user side 500 comprises: a model training module 501, configured to perform model training with local data to obtain a local model and a private model; a parameter determining module 502, configured to determine a local weight parameter corresponding to the local model according to the degree of difference between the local model and the private model; a first sending module 503, configured to send the local model parameters and the local weight parameter corresponding to the local model to the server, so that the server updates the global model parameters according to the local model parameters and the local weight parameters; a first receiving module 504, configured to receive the updated global model parameters from the server; and a first updating module 505, configured to update the local model parameters and the local weight parameter according to the updated global model parameters to obtain updated local model parameters and an updated local weight parameter.
It should be noted here that the above description of the training apparatus and system embodiments is similar to the description of the method embodiments given above with reference to figs. 1 to 4 and has similar advantageous effects, so it is not repeated. For technical details not disclosed in the apparatus and system embodiments of the present application, please refer to the description of the method embodiments shown in figs. 1 to 4; for economy of description they are not repeated here.
According to a sixth aspect of the present application, there is provided a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements any of the methods described above when executing the program.
According to a seventh aspect of the present application, there is provided a storage medium containing computer-executable instructions which, when executed by a computer processor, perform any of the methods described above.
According to embodiments of the present application, an electronic device and a readable storage medium are also provided.
Fig. 8 shows a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 8, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in device 800 are connected to I/O interface 805, including: an input unit 806 such as a keyboard, mouse, etc.; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, etc.; and a communication unit 809, such as a network card, modem, wireless communication transceiver, or the like. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 801 performs the various methods and processes described above, such as the training method of the federated learning model. For example, in some embodiments, the training method of the federated learning model may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the training method of the federated learning model described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the training method of the federated learning model in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described here may be realized in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present application may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this application, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that steps may be reordered, added, or deleted in the various flows shown above. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions disclosed in the present application can be achieved; no limitation is imposed herein.
Furthermore, the terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality" means two or more, unless explicitly defined otherwise.
The foregoing is merely specific embodiments of the present application, but the scope of protection of the present application is not limited thereto; any changes or substitutions that would readily occur to a person skilled in the art within the technical scope disclosed in the present application are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (5)

1. A training system for a federal learning model, characterized by comprising a server and a plurality of clients, wherein different clients are different hospitals in different areas;
the client comprises:
the model training module is used for acquiring local data, wherein the local data are medical data of the hospital corresponding to the client, and the patient characteristics and detection standards corresponding to the local data of different hospitals are different; and for performing detection model training with the local data to obtain a local model and a native model;
the parameter determining module is used for determining the local weight parameters corresponding to the local model according to the degree of difference between the local model and the native model;
the first sending module is used for sending the local model parameters corresponding to the local model and the local weight parameters to the server, so that the server updates the global model parameters according to the local model parameters and the local weight parameters;
the first receiving module is used for receiving updated global model parameters from the server;
the first updating module is used for updating the local model parameters and the local weight parameters according to the updated global model parameters to obtain updated local model parameters and updated local weight parameters;
the server comprises:
the second receiving module is used for receiving the local model parameters and the local weight parameters from the plurality of clients;
the second updating module is used for updating the global model parameters according to the plurality of local model parameters and local weight parameters;
the second sending module is used for sending the updated global model parameters to each client, so that each client updates the local model parameters and the local weight parameters according to the updated global model parameters to obtain the updated local model parameters and the updated local weight parameters;
the parameter determining module comprises:
determining a local gradient vector according to the local model;
determining a native gradient vector according to the native model;
determining a difference angle according to the local gradient vector and the native gradient vector;
determining the local weight parameters corresponding to the local model based on the difference angle;
wherein the difference angle is determined by the following formula:

\theta_i^t = \arccos\left( \frac{\langle g_i^t,\ \hat{g}_i^t \rangle}{\lVert g_i^t \rVert\, \lVert \hat{g}_i^t \rVert} \right)

in the formula, \theta_i^t is used to characterize the difference angle, g_i^t is used to characterize the local gradient vector, and \hat{g}_i^t is used to characterize the native gradient vector;
the parameter determining module further comprises:
determining a sample proportion of the local data to the global data;
determining an adjustment factor according to the difference angle, wherein the value of the adjustment factor is inversely related to the difference angle;
combining the sample proportion and the adjustment factor to obtain the local weight parameters;
wherein the local weight parameters are determined by a formula in which \omega_i^t is used to characterize the local weight parameter corresponding to the i-th client at the t-th iteration, \alpha and \beta are preset hyperparameters, and n_i is used to characterize the sample size of the local data at the i-th client.
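As a hedged illustration of the parameter determining module in claim 1, the Python sketch below computes the difference angle as the angle between the local and native gradient vectors and combines a sample proportion with an angle-dependent adjustment factor; the function names, the cosine-based adjustment, and the exact way the two factors are combined are assumptions, not the claimed formula.

```python
import numpy as np

def difference_angle(local_grad, native_grad):
    # Angle (radians) between the local gradient vector and the native gradient vector.
    cos_sim = np.dot(local_grad, native_grad) / (
        np.linalg.norm(local_grad) * np.linalg.norm(native_grad) + 1e-12
    )
    return float(np.arccos(np.clip(cos_sim, -1.0, 1.0)))

def local_weight(theta, n_local, n_global, alpha=1.0, beta=1.0):
    # Sample proportion of the client's local data relative to the global data.
    sample_ratio = n_local / n_global
    # Adjustment factor that shrinks as the difference angle grows
    # (alpha and beta stand in for the preset hyperparameters).
    adjustment = alpha * np.cos(theta) + beta
    return sample_ratio * adjustment

# Example: gradient vectors taken from the two locally trained models.
g_local = np.array([0.2, -0.1, 0.4])
g_native = np.array([0.1, -0.2, 0.5])
theta = difference_angle(g_local, g_native)
weight = local_weight(theta, n_local=500, n_global=10_000)
```

Clamping the cosine to [-1, 1] before arccos guards against floating-point round-off producing NaN.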
2. The system of claim 1, wherein the model training module comprises:
the receiving sub-module is used for receiving an initialization model from the server;
the setting sub-module is used for setting first model parameters of the initialization model to obtain an initialized local model;
the setting sub-module is further used for setting second model parameters of the initialization model to obtain an initialized native model;
the training sub-module is used for training the initialized local model on the local data to obtain the local model;
the training sub-module is further used for training the initialized native model on the local data to obtain the native model.
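A minimal sketch of the model training module of claim 2, assuming PyTorch and assuming that the first and second model parameter settings correspond to keeping the received initialization versus re-initializing it; the helper name and training-loop details are hypothetical.

```python
import copy
import torch
from torch import nn

def train_two_models(init_model: nn.Module, loader, epochs: int = 1):
    # First parameter setting (assumed): keep the parameters as received from the server.
    local_model = copy.deepcopy(init_model)
    # Second parameter setting (assumed): re-initialize the copy's parameters.
    native_model = copy.deepcopy(init_model)
    for p in native_model.parameters():
        nn.init.normal_(p, mean=0.0, std=0.01)

    # Train both models on the client's local data.
    for model in (local_model, native_model):
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
        loss_fn = nn.CrossEntropyLoss()
        model.train()
        for _ in range(epochs):
            for x, y in loader:
                optimizer.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()
                optimizer.step()
    return local_model, native_model

# Example with synthetic data:
# X = torch.randn(64, 10); y = torch.randint(0, 2, (64,))
# loader = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(X, y), batch_size=16)
# local_model, native_model = train_two_models(nn.Linear(10, 2), loader)
```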
3. The system of claim 1, wherein the second updating module comprises:
the normalization sub-module is used for normalizing the local weight parameters to obtain global weight parameters;
and the aggregation sub-module is used for performing aggregation according to the local model parameters and the global weight parameters, so as to update the global model parameters and obtain the updated global model parameters.
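A minimal sketch of the second updating module of claim 3, assuming each client reports a flattened parameter vector and that aggregation is a weighted average under the normalized global weight parameters; the function name and the averaging form are assumptions.

```python
import numpy as np

def update_global_parameters(local_params, local_weights):
    # Normalization sub-module: turn the local weight parameters into global weight parameters.
    w = np.asarray(local_weights, dtype=float)
    global_weights = w / w.sum()
    # Aggregation sub-module: weighted average of the clients' local model parameters.
    stacked = np.stack([np.asarray(p, dtype=float) for p in local_params])
    return np.average(stacked, axis=0, weights=global_weights)

# Example: three clients, each reporting a flattened parameter vector and a local weight.
params = [np.array([0.1, 0.2]), np.array([0.3, 0.1]), np.array([0.2, 0.2])]
weights = [0.5, 1.2, 0.8]
updated_global = update_global_parameters(params, weights)
```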
4. A computer device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the system of any one of claims 1-3 when executing the program.
5. A storage medium containing computer-executable instructions which, when executed by a computer processor, implement the system of any one of claims 1-3.
CN202111248479.7A 2021-10-26 2021-10-26 Training method, device, system, storage medium and equipment for federal learning model Active CN113837399B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111248479.7A CN113837399B (en) 2021-10-26 2021-10-26 Training method, device, system, storage medium and equipment for federal learning model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111248479.7A CN113837399B (en) 2021-10-26 2021-10-26 Training method, device, system, storage medium and equipment for federal learning model

Publications (2)

Publication Number Publication Date
CN113837399A CN113837399A (en) 2021-12-24
CN113837399B true CN113837399B (en) 2023-05-30

Family

ID=78966142

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111248479.7A Active CN113837399B (en) 2021-10-26 2021-10-26 Training method, device, system, storage medium and equipment for federal learning model

Country Status (1)

Country Link
CN (1) CN113837399B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114490695A (en) * 2022-02-22 2022-05-13 光大科技有限公司 A global model update processing method and device
CN114330759B (en) * 2022-03-08 2022-08-02 富算科技(上海)有限公司 Training method and system for longitudinal federated learning model
TWI800304B (en) * 2022-03-16 2023-04-21 英業達股份有限公司 Federated learning system using synonym
CN114844889B (en) * 2022-04-14 2023-07-07 北京百度网讯科技有限公司 Video processing model updating method and device, electronic equipment and storage medium
CN114707662B (en) * 2022-04-15 2024-06-18 支付宝(杭州)信息技术有限公司 Federal learning method, federal learning device and federal learning system
CN114741611B (en) * 2022-06-08 2022-10-14 杭州金智塔科技有限公司 Federal recommendation model training method and system
CN115145966B (en) * 2022-09-05 2022-11-11 山东省计算中心(国家超级计算济南中心) Comparison federated learning method and system for heterogeneous data

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5586033A (en) * 1992-09-10 1996-12-17 Deere & Company Control system with neural network trained as general and local models
CN111723947A (en) * 2020-06-19 2020-09-29 深圳前海微众银行股份有限公司 A training method and device for a federated learning model
CN112364943A (en) * 2020-12-10 2021-02-12 广西师范大学 Federal prediction method based on federal learning
CN113112027A (en) * 2021-04-06 2021-07-13 杭州电子科技大学 Federal learning method based on dynamic adjustment model aggregation weight

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on the learning and optimization problem of belief rule base expert systems; Chang Rui; Bai Yangsen; Meng Qingtao; Journal of North China University of Water Resources and Electric Power (Natural Science Edition), Issue 04; full text *

Also Published As

Publication number Publication date
CN113837399A (en) 2021-12-24

Similar Documents

Publication Publication Date Title
CN113837399B (en) Training method, device, system, storage medium and equipment for federal learning model
CN112561078A (en) Distributed model training method, related device and computer program product
US20240005210A1 (en) Data protection method, apparatus, medium and device
CN113010896B (en) Method, apparatus, device, medium and program product for determining abnormal object
CN113705362B (en) Training method and device of image detection model, electronic equipment and storage medium
CN113378911B (en) Image classification model training method, image classification method and related device
CN113642711B (en) Processing method, device, equipment and storage medium of network model
CN114500339B (en) Node bandwidth monitoring method and device, electronic equipment and storage medium
CN113627536A (en) Model training method, video classification method, device, equipment and storage medium
CN113205495A (en) Image quality evaluation and model training method, device, equipment and storage medium
CN114742237A (en) Federal learning model aggregation method and device, electronic equipment and readable storage medium
CN114242100B (en) Audio signal processing method, training method, device, equipment and storage medium thereof
CN113037489B (en) Data processing method, device, equipment and storage medium
CN113344213A (en) Knowledge distillation method, knowledge distillation device, electronic equipment and computer readable storage medium
CN116341680A (en) Artificial intelligence model adaptation method, device, electronic equipment and storage medium
CN119026710B (en) Personalized robust aggregation method and device based on key parameters and storage medium
CN114844889B (en) Video processing model updating method and device, electronic equipment and storage medium
CN114494818B (en) Image processing method, model training method, related device and electronic equipment
CN115334321B (en) Method and device for acquiring access heat of video stream, electronic equipment and medium
EP4036861A2 (en) Method and apparatus for processing point cloud data, electronic device, storage medium, computer program product
CN113221999B (en) Picture annotation accuracy obtaining method and device and electronic equipment
CN117933390A (en) Model mixing precision determination method, device, equipment and storage medium
CN113038357B (en) Indoor positioning method and electronic equipment
CN111179310B (en) Video data processing method, device, electronic equipment and computer readable medium
CN116827703A (en) Virtual interaction control method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant