WO2023120776A1 - Device-to-device knowledge transmission method using proxy dataset in federated learning, and system therefor - Google Patents
Device-to-device knowledge transmission method using proxy dataset in federated learning, and system therefor
- Publication number
- WO2023120776A1 (PCT/KR2021/019745)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- knowledge
- management server
- devices
- neural network
- artificial neural
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 27
- 230000005540 biological transmission Effects 0.000 title abstract 2
- 230000006870 function Effects 0.000 claims abstract description 171
- 238000013528 artificial neural network Methods 0.000 claims abstract description 91
- 238000010801 machine learning Methods 0.000 claims abstract description 85
- 238000007726 management method Methods 0.000 claims description 122
- 238000010606 normalization Methods 0.000 claims description 28
- 238000012546 transfer Methods 0.000 claims description 17
- 230000001537 neural effect Effects 0.000 claims 1
- 238000013500 data storage Methods 0.000 description 10
- 230000004913 activation Effects 0.000 description 9
- 238000004891 communication Methods 0.000 description 8
- 238000011160 research Methods 0.000 description 6
- 238000013473 artificial intelligence Methods 0.000 description 4
- 238000010586 diagram Methods 0.000 description 4
- 238000013140 knowledge distillation Methods 0.000 description 4
- 238000009826 distribution Methods 0.000 description 3
- 230000014509 gene expression Effects 0.000 description 3
- 238000012827 research and development Methods 0.000 description 3
- 230000001419 dependent effect Effects 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000002250 progressing effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/20—Ensemble learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/02—Knowledge representation; Symbolic representation
Definitions
- the present invention relates to a method and system for transferring knowledge between devices using a proxy data set in federated learning.
- The present invention was derived from research conducted as part of the Group Research Support Program (R&D) of the Ministry of Science and ICT (unique task number: 1711135177, task number: 2020R1A4A1018607, research project title: Development of the core structure of a Meta Federated Learning-based mobile edge computing system, project management institution: National Research Foundation of Korea, project executing agency: Kyunghee University (International Campus) Industry-Academic Cooperation Foundation, research period: 2020.07.01. ~ 2023.02.28.) and the Information and Communication Technology Innovation Talent Fostering Program (R&D) (unique task number: 1711139517, task number: 2021-0-02068-001, research project title: Research and development of an artificial intelligence innovation hub, project management agency: Information and Communication Planning and Evaluation Institute, project executing agency: Kyunghee University (International Campus) Industry-University Cooperation Foundation, research period: 2020.07.01. ~ 2025.12.31.). The Korean government holds no property interest in any aspect of the present invention.
- Recent mobile applications provide various AI functions based on user data, such as AI-based cameras, extended reality (XR), and intelligent assistants.
- However, AI functions that rely on centralized machine learning (ML) inherently carry the risk of leaking personal data.
- Accordingly, the development of AI functions that use machine learning based on federated learning (FL), which can prevent such leakage, is progressing continuously.
- However, because the training data held by the central computing device and the various user devices are heterogeneous, AI functions that use FL-based machine learning can suffer degraded training performance, and the resulting machine learning models consequently diverge from one another.
- Accordingly, there is a need for a technique that prevents the leakage of personal data while simultaneously improving machine learning performance on both the central computing device and the user devices.
- One technical problem to be solved by the present invention is to provide a method and system for transferring knowledge between devices using a proxy dataset in federated learning, in order to prevent the leakage of personal data during AI training that uses machine learning based on federated learning (FL).
- Another technical problem to be solved by the present invention is to provide a method and system for transferring knowledge between devices using a proxy dataset in federated learning, in order to improve the learning performance of machine learning while minimizing the communication data load during FL-based AI training.
- A knowledge transfer method between devices using a proxy dataset in federated learning according to an embodiment of the present invention includes: downloading proxy data; generating management server output knowledge by machine-learning input data with a first artificial neural network function in a management server; generating a plurality of device output knowledge by machine-learning input data with second artificial neural network functions in first to n-th devices; and, when a preset number of iterations is reached, stopping the machine learning, transmitting the management server output knowledge to the first to n-th devices, and transmitting the plurality of device output knowledge to the management server.
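Read as a whole, this claim describes a round-based exchange: the proxy data is shared first, then the server and the devices alternately train and swap output knowledge until a preset round count stops the loop. A minimal control-flow sketch of that reading follows; the object interfaces (download_proxy, train_round, receive) and the round limit are illustrative assumptions made for the sketch, not terms from the patent.

```python
# Minimal sketch of the claimed round structure.  The server and device
# objects are placeholders assumed to expose download_proxy, train_round,
# and receive; none of these names come from the patent itself.

def run_federated_kd(server, devices, proxy_data, max_rounds=10):
    # Step 1: the management server and every device download the proxy data.
    server.download_proxy(proxy_data)
    for device in devices:
        device.download_proxy(proxy_data)

    device_knowledge = [None] * len(devices)   # g1, ..., gn
    server_knowledge = None                    # gs

    for _ in range(max_rounds):                # the control server counts rounds
        # Step 2: the server machine-learns with ANN(1) and produces gs.
        server_knowledge = server.train_round(proxy_data, device_knowledge)
        # Step 3: each device machine-learns with its ANN(2(k)) and produces gk.
        device_knowledge = [device.train_round(proxy_data, server_knowledge)
                            for device in devices]

    # Step 4: the preset number of rounds is reached, so training stops and
    # the latest knowledge is exchanged in both directions.
    for device in devices:
        device.receive(server_knowledge)
    server.receive(device_knowledge)
    return server_knowledge, device_knowledge
```

The concrete work inside each train_round corresponds to the loss and normalization functions described in the following paragraphs.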
- According to an embodiment, generating the management server output knowledge by machine-learning the input data with the first artificial neural network function in the management server includes: receiving the plurality of n-th device output knowledge provided from the first to n-th devices; and generating a first loss function based on the proxy data and the n-th management server output knowledge produced by the first artificial neural network function from the input data.
- Generating the management server output knowledge further includes: generating a first normalization function based on the plurality of n-th device output knowledge, the n-th management server output knowledge, and the proxy data; regenerating the first artificial neural network function based on the first loss function and the first normalization function; and machine-learning the input data with the first artificial neural network function to generate (n+1)-th management server output knowledge.
- According to an embodiment, generating the plurality of device output knowledge by machine-learning the input data with the second artificial neural network functions in the first to n-th devices includes: receiving the (n+1)-th management server output knowledge provided by the management server; uploading a plurality of n-th device personal data generated in each of the first to n-th devices; and generating a plurality of second loss functions, each based on one of the plurality of n-th device output knowledge generated from the input data and the corresponding n-th device personal data.
- Generating the plurality of device output knowledge further includes: generating a plurality of second normalization functions, each based on one of the plurality of n-th device output knowledge, the (n+1)-th management server output knowledge, and the proxy data; regenerating a plurality of second artificial neural network functions based on the plurality of second loss functions and the plurality of second normalization functions; and machine-learning the input data with the plurality of second artificial neural network functions to generate a plurality of (n+1)-th device output knowledge.
- According to an embodiment, stopping the machine learning when the preset number of iterations is reached, transmitting the management server output knowledge to the first to n-th devices, and transmitting the plurality of device output knowledge to the management server includes: when the preset number of iterations is reached, stopping the machine learning of the input data in the first artificial neural network function and transmitting the (n+1)-th management server output knowledge to the first to n-th devices; and, when the preset number of iterations is reached, stopping the machine learning of the input data in the second artificial neural network functions and transmitting the plurality of (n+1)-th device output knowledge to the management server.
- A knowledge transfer system between devices using a proxy dataset in federated learning according to an embodiment of the present invention includes: a management server that machine-learns input data using proxy data and a first artificial neural network function and generates management server output knowledge; first to n-th devices that machine-learn input data using the proxy data and second artificial neural network functions and generate a plurality of device output knowledge; and a control server that stops the machine learning when a preset number of iterations is reached.
- According to an embodiment, the management server includes: a first input unit that receives the plurality of n-th device output knowledge from the first to n-th devices; a first modeling unit that generates a first loss function based on the proxy data and the n-th management server output knowledge produced by the first artificial neural network function from the input data, generates a first normalization function based on the plurality of n-th device output knowledge, the n-th management server output knowledge, and the proxy data, and regenerates the first artificial neural network function based on the first loss function and the first normalization function; and a first output unit that outputs the (n+1)-th management server output knowledge, which is the result of machine-learning the input data with the first artificial neural network function, to the first to n-th devices.
- According to an embodiment, each of the first to n-th devices includes: a second input unit that receives the (n+1)-th management server output knowledge; a device dataset storage unit that uploads the knowledge the device has learned from its pre-stored n-th personal dataset and the proxy dataset; a second modeling unit that generates a second loss function based on the n-th device output knowledge generated from the input data and the pre-stored n-th device personal data, generates a second normalization function based on the n-th device output knowledge, the (n+1)-th management server output knowledge, and the proxy data, and regenerates the second artificial neural network function based on the second loss function and the second normalization function; and a second output unit that machine-learns the input data with the second artificial neural network function and generates the (n+1)-th device output knowledge.
- According to an embodiment, the control server stops the machine learning of the input data in the first artificial neural network function when the preset number of iterations is reached and transmits the (n+1)-th management server output knowledge to the first to n-th devices, and it stops the machine learning of the input data in the second artificial neural network functions when the preset number of iterations is reached and controls the plurality of (n+1)-th device output knowledge to be transmitted to the management server.
- An embodiment of the present invention also includes a non-transitory computer-readable recording medium on which a program for executing the knowledge transfer method between devices using a proxy dataset in federated learning is recorded.
- The method and system for transferring knowledge between devices using a proxy dataset in federated learning according to the present invention can prevent the leakage of personal data during AI training that uses machine learning based on federated learning (FL).
- In addition, the method and system minimize the communication data load and increase the training speed during FL-based AI training, thereby improving machine learning performance.
- FIG. 1 is a diagram of a knowledge transfer system between devices using a proxy dataset in federated learning according to an embodiment of the present invention.
- FIG. 2 is a diagram illustrating a process of performing machine learning using an artificial neural network function in a management server and a plurality of devices according to an embodiment of the present invention.
- FIG. 3 is a flowchart illustrating a process of performing machine learning in a management server according to an embodiment of the present invention.
- FIG. 4 is a flowchart illustrating a process of performing machine learning using an artificial neural network function in a plurality of devices according to an embodiment of the present invention.
- Hereinafter, various embodiments of the present invention are described in detail with reference to the accompanying drawings so that a person of ordinary skill in the art can easily practice them. The present invention may be embodied in many different forms and is not limited to the embodiments set forth herein.
- To describe the present invention clearly, parts irrelevant to the description are omitted, and the same reference numerals are used for the same or similar components throughout the specification; reference numerals introduced earlier may therefore also be used in other drawings.
- The size and thickness of each component shown in the drawings are drawn arbitrarily for convenience of description, so the present invention is not necessarily limited to what is illustrated, and thicknesses may be exaggerated to clearly represent various layers and regions.
- In this description, the expression "the same" may mean "substantially the same", that is, identical to a degree that a person of ordinary skill would accept as the same. Other expressions may likewise be read with "substantially" omitted.
- FIG. 1 is a diagram of a knowledge transfer system between devices using a proxy dataset in federated learning according to an embodiment of the present invention.
- the knowledge transfer system 1 between devices using a proxy dataset may include a management server 20 and a plurality of devices 40.
- the management server 20 may include a first input unit 22 , a first modeling unit 23 , and a first output unit 24 .
- the first input unit 22 may download proxy data.
- the first input unit 22 may receive input of a plurality of device output knowledge (g1, ..., gn, see FIG. 2 ), which is a result of machine learning by the plurality of devices 40 .
- The first modeling unit 23 may regenerate the first artificial neural network function (ANN(1)) for machine-learning the input data (x1), based on the plurality of device output knowledge (g1, ..., gn) provided from the first input unit 22, the proxy data, and the management server output knowledge (gs, see FIG. 2), which is the result of machine learning by the first artificial neural network function (ANN(1)) previously stored in the first modeling unit 23 (that is, the global model).
- Specifically, the first modeling unit 23 may generate a first loss function based on the management server output knowledge (gs), which is the result of machine learning by the first artificial neural network function (ANN(1)) previously stored in the first modeling unit 23, and the downloaded proxy data.
- The first loss function generated by the first modeling unit 23 expresses, as a function, the difference between the downloaded proxy data and the management server output knowledge (gs) produced by the previously stored first artificial neural network function (ANN(1)), and it can be generated using knowledge distillation (KD).
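The patent states only that this loss is produced by knowledge distillation; it does not fix the exact formula. A common distillation loss consistent with the description is the temperature-softened Kullback-Leibler term sketched below, where the soft targets, the temperature T, and the T**2 scaling are assumptions borrowed from standard KD practice rather than from the patent text.

```python
import numpy as np

def softmax(logits, T=1.0):
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)            # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(server_logits, proxy_targets, T=2.0):
    """Hypothetical first loss: soften both the server's outputs on the proxy
    data and the proxy reference targets, then measure their KL divergence.
    The temperature T and the T**2 scaling follow common KD practice."""
    p_target = softmax(proxy_targets, T)              # soft targets from proxy data
    p_server = softmax(server_logits, T)              # softened server output (gs)
    kl = np.sum(p_target * (np.log(p_target + 1e-12) - np.log(p_server + 1e-12)),
                axis=-1)
    return float(T * T * kl.mean())

# Example with a batch of two 3-class outputs (values are illustrative only).
print(kd_loss([[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]],
              [[1.8, 0.4, 0.2], [0.1, 1.7, 0.2]]))
```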
- In addition, the first modeling unit 23 may generate a first normalization function based on the management server output knowledge (gs), which is the result of machine learning by the first artificial neural network function (ANN(1)), the downloaded proxy data, and the plurality of device output knowledge (g1, ..., gn).
- The first normalization function generated by the first modeling unit 23 expresses, as a function, the normalized discrepancy between the management server output knowledge (gs) produced by the first artificial neural network function (ANN(1)) on the downloaded proxy data and the plurality of device output knowledge (g1, ..., gn), and it can be generated using the Kullback-Leibler divergence (KLD), the Jensen-Shannon divergence (JSD), or the like.
- The Kullback-Leibler divergence (KLD) is a function used to measure the difference between two probability distributions: it is the additional information entropy incurred when data from an ideal distribution are sampled using another distribution that approximates it. The Jensen-Shannon divergence (JSD) uses the KLD as a distance-like quantity: it computes the KLD of each of the two distributions against their average and then averages the two results.
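Both divergences are standard quantities; a minimal NumPy implementation over discrete distributions is sketched below. The three-class example vectors stand in for a server knowledge vector (gs) and one device knowledge vector (g1) and are purely illustrative.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence KL(p || q) between discrete distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def js_divergence(p, q):
    """Jensen-Shannon divergence: average the KLD of each input against their mixture."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

gs = [0.7, 0.2, 0.1]   # e.g. server output knowledge on one proxy sample
g1 = [0.5, 0.3, 0.2]   # e.g. one device's output knowledge on the same sample
print(kl_divergence(gs, g1), js_divergence(gs, g1))
```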
- the first modeling unit 23 may regenerate the first artificial neural network function ANN(1) for machine learning of the input data x1 based on the first loss function and the first normalization function.
- the first output unit 24 provides the management server output knowledge gs, which is a result of machine learning of the input data x1 based on the first artificial neural network function ANN(1), to a plurality of devices 40.
- the management server output knowledge gs output from the first output unit 24 may be provided to a plurality of devices 40 through the network 30 .
- the plurality of devices 40 may include a device data storage unit 41 , a second input unit 42 , a second modeling unit 43 , and a second output unit 44 .
- the device data storage unit 41 may store device personal data of the device 40 .
- the device personal data relates to data of a user using the device 40 and may correspond to various types such as images, texts, and voice files.
- Device personal data stored in the device data storage unit 41 of the plurality of devices 40 may be different according to user data.
- the second input unit 42 may download proxy data.
- the second input unit 42 may receive management server output knowledge (gs), which is a result of machine learning in the management server 20 .
- The second modeling unit 43 may regenerate the second artificial neural network function (ANN(2(1)), ..., ANN(2(n))) for machine-learning the input data (x2), based on the management server output knowledge (gs) provided from the second input unit 42, the device personal data provided from the device data storage unit 41, and the device output knowledge (g1, ..., gn), which is the result of machine learning by the second artificial neural network function (ANN(2(1)), ..., ANN(2(n))) previously stored in the second modeling unit 43.
- Specifically, the second modeling unit 43 may generate a second loss function based on the device personal data provided from the device data storage unit 41 and the downloaded proxy data.
- The second loss function generated by the second modeling unit 43 expresses, as a function, the difference between the downloaded proxy data and the device personal data provided from the device data storage unit 41, and it can be generated using the same knowledge distillation (KD) method as for the first artificial neural network function.
- In addition, the second modeling unit 43 may generate a second normalization function based on the device output knowledge (g1, ..., gn), which is the result of machine learning by the second artificial neural network function (ANN(2(1)), ..., ANN(2(n))), the downloaded proxy data, and the management server output knowledge (gs) provided from the management server 20.
- The second normalization function generated by the second modeling unit 43 expresses, as a function, the normalized discrepancy between the device output knowledge (g1, ..., gn) produced by the second artificial neural network function (ANN(2(1)), ..., ANN(2(n))) on the downloaded proxy data and the management server output knowledge (gs), and it can likewise be generated using the Kullback-Leibler divergence (KLD), the Jensen-Shannon divergence (JSD), or the like.
- The second modeling unit 43 may regenerate the second artificial neural network function (ANN(2(1)), ..., ANN(2(n))) for machine-learning the input data (x2) based on the second loss function and the second normalization function.
- The second output unit 44 may provide the device output knowledge (g1, ..., gn), which is the result of machine-learning the input data (x2) with the second artificial neural network function (ANN(2(1)), ..., ANN(2(n))), to the management server 20. The device output knowledge (g1, ..., gn) output from the second output unit 44 may be provided to the management server 20 through the network 30.
- the control server 50 may stop performing machine learning of the input data x1 in the first artificial neural network function ANN(1) when the preset number of times is exceeded.
- The control server 50 may stop machine-learning the input data (x2) in the second artificial neural network functions (ANN(2(1)), ..., ANN(2(n))) when the preset number of iterations is exceeded.
- FIG. 2 is a diagram illustrating a process of performing machine learning using an artificial neural network function in a management server and a plurality of devices according to an embodiment of the present invention.
- The first artificial neural network function (ANN(1)) generated by the first modeling unit 23 included in the management server 20 may be composed of a first input layer (IL(1)), a first hidden layer (HL(1)), and a first output layer (OL(1)).
- Input data x1 may be applied to the first input layer IL( 1 ).
- Management server output value data zs may be generated while the input data x1 applied to the first input layer IL(1) passes through the first hidden layer HL(1).
- the management server output value data zs generated through the first hidden layer HL(1) may be applied to the first output layer OL(1).
- The first hidden layer (HL(1)) may be composed of various activation functions, and the input data (x1) applied to the first hidden layer (HL(1)) is multiplied by weight values through those activation functions and converted into the management server output value data (zs). Hereinafter, the weight values by which the activation functions multiply the input data (x1) are referred to as characteristic value data (es). The management server output value data (zs) thus contains information about the characteristic value data (es) applied to the input data (x1).
- the management server output value data zs generated in the first output layer OL( 1 ) of the first modeling unit 23 may be applied to the first output unit 24 .
- At this time, the first output unit 24 may provide the management server output knowledge (gs) to the plurality of devices 40 through the network 30, based on the management server output value data (zs) containing the information about the characteristic value data (es).
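A single-hidden-layer network matching this description is sketched below: the input layer receives x1, the hidden layer multiplies it by weight values (the characteristic value data es) inside an activation function, and the output layer yields the output value data zs from which the output knowledge gs is derived. The layer sizes, the ReLU and softmax choices, and the zs-to-gs mapping are assumptions, since the patent leaves them unspecified.

```python
import numpy as np

rng = np.random.default_rng(0)

# Characteristic value data (weights) of the hypothetical first ANN.
W_hidden = rng.normal(size=(16, 8))      # IL(1) -> HL(1): 16 inputs, 8 hidden units
W_output = rng.normal(size=(8, 4))       # HL(1) -> OL(1): 4 output classes

def relu(a):
    return np.maximum(a, 0.0)

def softmax(a):
    a = a - a.max(axis=-1, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=-1, keepdims=True)

def ann1_forward(x1):
    """x1 -> hidden activation -> output value data zs -> output knowledge gs."""
    h = relu(x1 @ W_hidden)              # hidden layer: weights (es) plus activation
    zs = h @ W_output                    # management server output value data (zs)
    gs = softmax(zs)                     # management server output knowledge (gs)
    return zs, gs

x1 = rng.normal(size=(1, 16))            # one proxy-data sample
zs, gs = ann1_forward(x1)
print(zs.shape, gs.shape)                # (1, 4) (1, 4)
```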
- The second artificial neural network functions (ANN(2(1)), ..., ANN(2(n))) generated by the second modeling units 43 included in the plurality of devices 40 may each be composed of a second input layer (IL(2)), a second hidden layer (HL(2)), and a second output layer (OL(2)).
- Input data x2 may be applied to the second input layer IL( 2 ).
- Device output value data z1, ..., zn may be generated while the input data x2 applied to the second input layer IL(2) passes through the second hidden layer HL(2).
- the device output value data z1, ..., zn generated through the second hidden layer HL(2) may be applied to the second output layer OL(2).
- The second hidden layer (HL(2)) may be composed of various activation functions, and the input data (x2) applied to the second hidden layer (HL(2)) is multiplied by weight values through those activation functions and converted into the device output value data (z1, ..., zn). In each of the plurality of devices 40, the weight values by which the activation functions multiply the input data (x2) are referred to as characteristic value data (e1, ..., en). The device output value data (z1, ..., zn) thus contains information about the characteristic value data (e1, ..., en).
- a plurality of device output value data (z1, ..., zn) generated in the second output layer (OL(2)) of the second modeling unit 43 may be applied to the second output unit 44.
- At this time, the second output unit 44 may provide the plurality of device output knowledge (g1, ..., gn) to the management server 20 through the network 30, based on the plurality of device output value data (z1, ..., zn) containing the information about the characteristic value data (e1, ..., en).
- Because the management server 20 provides the management server output knowledge (gs) to the plurality of devices 40 on the basis of the management server output value data (zs), which carries only the information about the characteristic value data (es) applied to the input data (x1), the communication data load is minimized and the communication speed can be increased.
- Likewise, because the plurality of devices 40 provide the plurality of device output knowledge (g1, ..., gn) to the management server 20 on the basis of the plurality of device output value data (z1, ..., zn), which carry only the information about the characteristic value data (e1, ..., en) applied to the input data (x2), the communication data load is minimized and the communication speed can be increased.
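The communication argument can be made concrete with a rough byte count: per proxy sample, only a short output-knowledge vector crosses the network, never the raw personal data or the full weight tensors. The sizes below (a 224x224 RGB image, a one-million-parameter model, a 10-class knowledge vector) are illustrative assumptions, not figures from the patent.

```python
import numpy as np

# Rough per-item payload sizes in float32, purely for illustration.
raw_image        = np.zeros((224, 224, 3), dtype=np.float32)  # one personal-data sample
full_model       = np.zeros(1_000_000,     dtype=np.float32)  # a complete weight update
output_knowledge = np.zeros(10,            dtype=np.float32)  # knowledge for one sample

for name, array in [("raw image", raw_image),
                    ("full model update", full_model),
                    ("output knowledge", output_knowledge)]:
    print(f"{name:>18}: {array.nbytes:>9,d} bytes")
# The knowledge vector is several orders of magnitude smaller, and the
# personal data itself never leaves the device.
```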
- Although the second artificial neural network functions (ANN(2(1)), ..., ANN(2(n))) generated by the second modeling units 43 of the plurality of devices 40 are trained using the device personal data of the plurality of devices 40, the plurality of devices 40 provide the management server 20 only with the characteristic value data (e1, ..., en) or the device output value data (z1, ..., zn); the device personal data itself is not transmitted. Accordingly, since the users' personal data never leaves the plurality of devices 40, leakage of personal data can be prevented.
- Furthermore, the second artificial neural network functions (ANN(2(1)), ..., ANN(2(n))) generated by the second modeling units 43 of the plurality of devices 40 are regenerated based on the management server output knowledge (gs) derived from the management server output value data (zs) provided by the management server 20, and the first artificial neural network function (ANN(1)) generated by the first modeling unit 23 of the management server 20 is regenerated based on the plurality of device output knowledge (g1, ..., gn) derived from the device output value data (z1, ..., zn) provided by the plurality of devices 40. As a result, the heterogeneity between the data learned at the management server 20 and the data learned at the plurality of devices 40 can be reduced, and machine learning performance can be improved.
- FIG. 3 is a flowchart illustrating a process of performing machine learning in a management server according to an embodiment of the present invention.
- Proxy data can be downloaded in step S10.
- the first input unit 22 of the management server 20 may download proxy data.
- In step S11, the plurality of n-th device output knowledge provided by the plurality of devices may be input.
- The first input unit 22 of the management server 20 may receive the plurality of n-th device output knowledge (g1, ..., gn) generated based on the second artificial neural network functions (ANN(2(1)), ..., ANN(2(n))) in the plurality of devices 40.
- In step S12, a first loss function may be generated using the n-th management server output knowledge (gs) calculated through the first artificial neural network function (ANN(1)) and the proxy data.
- The first modeling unit 23 of the management server 20 may generate the first loss function based on the n-th management server output knowledge (gs), which is the result of machine learning by the first artificial neural network function (ANN(1)), and the downloaded proxy data.
- In step S13, a first normalization function may be generated using the n-th management server output knowledge (gs) calculated through the first artificial neural network function (ANN(1)), the plurality of n-th device output knowledge (g1, ..., gn), and the proxy data.
- The first modeling unit 23 of the management server 20 may generate the first normalization function using the n-th management server output knowledge (gs), which is the result of machine learning by the first artificial neural network function (ANN(1)), the plurality of n-th device output knowledge (g1, ..., gn) received in step S11, and the proxy data.
- a first artificial neural network function may be regenerated based on the first loss function and the first normalization function.
- The first modeling unit 23 of the management server 20 may regenerate the first artificial neural network function (ANN(1)) based on the first loss function generated in step S12 and the first normalization function generated in step S13.
- the regenerated first artificial neural network function may perform machine learning on the input data and generate the n+1 management server output knowledge (gs).
- The input data (x1) may be applied to the first input layer (IL(1)) of the first modeling unit 23 of the management server 20, and the (n+1)-th management server output knowledge (gs) can be generated through the first artificial neural network function (ANN(1)).
- In step S16, the (n+1)-th management server output knowledge may be provided to each of the plurality of devices.
- The control server 50 stops the machine learning in the first modeling unit 23 when the preset number of learning iterations is reached, and the first output unit 24 may provide the (n+1)-th management server output knowledge (gs) to each of the plurality of devices 40.
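Steps S10 to S16 can be condensed into a single server-side update. The sketch below uses PyTorch and assumes a classification setting; the cross-entropy form of the first loss, the KL form of the first normalization function, the weighting factor lam, the temperature T, and the externally supplied optimizer are all assumptions, since the patent specifies only that knowledge distillation and a divergence-based normalization are used.

```python
import torch
import torch.nn.functional as F

def server_round(model, optimizer, proxy_x, proxy_y, device_knowledge,
                 lam=1.0, T=2.0):
    """One hypothetical server-side round (FIG. 3, S10-S16) over the proxy set.
    proxy_y holds class indices; device_knowledge is a list of tensors g1..gn,
    each of shape [num_proxy_samples, num_classes], received in step S11."""
    model.train()
    zs = model(proxy_x)                                   # forward pass of ANN(1)
    loss = F.cross_entropy(zs, proxy_y)                   # S12: first loss (proxy targets)
    log_p_server = F.log_softmax(zs / T, dim=-1)
    # S13: first normalization function against each device's output knowledge.
    reg = sum(F.kl_div(log_p_server, g_k, reduction="batchmean")
              for g_k in device_knowledge) / max(len(device_knowledge), 1)
    total = loss + lam * reg
    optimizer.zero_grad()                                 # regenerate ANN(1) by a
    total.backward()                                      # gradient step on loss + reg
    optimizer.step()
    with torch.no_grad():                                 # S15: (n+1)-th server knowledge
        gs = F.softmax(model(proxy_x) / T, dim=-1)
    return gs                                             # S16: provided to the devices
```

A matching device-side sketch is given after the description of FIG. 4 below.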
- FIG. 4 is a flowchart illustrating a process of performing machine learning using an artificial neural network function in a plurality of devices according to an embodiment of the present invention.
- Proxy data can be downloaded in step S20.
- the second input units 42 of the plurality of devices 40 may download proxy data.
- In step S21, the (n+1)-th management server output knowledge provided by the management server may be input.
- the second input unit 42 of the plurality of devices 40 may receive n+1th management server output knowledge gs provided from the management server 20 .
- In step S22, a plurality of n-th device personal data generated by the plurality of devices may be uploaded.
- the plurality of devices 40 may upload the plurality of n-th device personal data stored in the device data storage unit 41 .
- A plurality of second loss functions may be generated based on the plurality of n-th device output knowledge calculated through the second artificial neural network functions and the plurality of n-th device personal data.
- The second modeling units 43 of the plurality of devices 40 may generate the plurality of second loss functions based on the n-th device personal data uploaded from the device data storage units 41 and the plurality of n-th device output knowledge (g1, ..., gn) calculated through the second artificial neural network functions (ANN(2(1)), ..., ANN(2(n))).
- For example, the second modeling unit 43 of any one device 40 may generate a second loss function based on the n-th device personal data uploaded from the device data storage unit 41 of that device and the n-th device output knowledge calculated through the second artificial neural network function of that device. Likewise, the second modeling unit 43 of another device 40 may generate a second loss function based on the n-th device personal data uploaded from the device data storage unit 41 of that other device and the n-th device output knowledge calculated through the second artificial neural network function of that other device.
- In step S25, a plurality of second normalization functions may be generated using the n-th device output knowledge calculated through the second artificial neural network functions, the (n+1)-th management server output knowledge provided by the management server, and the proxy data.
- The second modeling units 43 of the plurality of devices 40 may generate the second normalization functions using the n-th device output knowledge (g1, ..., gn), which is the result of machine learning by the second artificial neural network functions (ANN(2(1)), ..., ANN(2(n))), the (n+1)-th management server output knowledge (gs) provided by the management server 20, and the proxy data.
- a second artificial neural network function may be regenerated based on the second loss function and the second normalization function.
- The second modeling units 43 of the plurality of devices 40 may regenerate the second artificial neural network functions (ANN(2(1)), ..., ANN(2(n))) based on the second loss functions generated in step S24 and the second normalization functions generated in step S25.
- The input data may be machine-learned and a plurality of (n+1)-th device output knowledge may be generated.
- The input data (x2) may be applied to the second input layers (IL2(1), ..., IL2(n)) of the second modeling units 43 of the plurality of devices 40, and a plurality of (n+1)-th device output knowledge (g1, ..., gn) can be generated through the second artificial neural network functions (ANN(2(1)), ..., ANN(2(n))).
- In step S28, the plurality of (n+1)-th device output knowledge and the (n+1)-th management server output knowledge may be compared.
- The control server 50 stops the machine learning in the second modeling units 43 when the preset number of learning iterations is reached, and the second output units 44 may provide the plurality of (n+1)-th device output knowledge to the management server 20.
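Similarly, steps S20 to S28 reduce to one device-side update per device: a supervised loss on the device's own personal data plus a proxy-data regularizer that pulls the device's soft output toward the server knowledge. As with the server sketch above, the concrete loss forms, lam, T, the step labels in the comments, and the optimizer are assumptions rather than requirements stated in the patent.

```python
import torch
import torch.nn.functional as F

def device_round(model, optimizer, personal_x, personal_y,
                 proxy_x, server_knowledge, lam=1.0, T=2.0):
    """One hypothetical device-side round (FIG. 4, S20-S28) for device k.
    server_knowledge is the (n+1)-th gs received in step S21, shaped
    [num_proxy_samples, num_classes]; personal_y holds class indices."""
    model.train()
    # Second loss function on the device's own personal data (around S24).
    z_personal = model(personal_x)
    loss = F.cross_entropy(z_personal, personal_y)
    # Second normalization function on the shared proxy data (S25).
    z_proxy = model(proxy_x)
    log_p_device = F.log_softmax(z_proxy / T, dim=-1)
    reg = F.kl_div(log_p_device, server_knowledge, reduction="batchmean")
    total = loss + lam * reg
    optimizer.zero_grad()                                 # regenerate ANN(2(k))
    total.backward()
    optimizer.step()
    with torch.no_grad():                                 # (n+1)-th device knowledge
        g_k = F.softmax(model(proxy_x) / T, dim=-1)
    return g_k                                            # provided to the management server
```

Note that only g_k, computed on the shared proxy data, is returned; the personal data and labels stay on the device, which is the privacy property the description emphasizes.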
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- General Health & Medical Sciences (AREA)
- Biophysics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Medical Informatics (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
Description
본 발명은 연합학습에서 프록시데이터 세트를 이용한 장치 간 지식 전달 방법 및 그 시스템에 관한 것이다.The present invention relates to a method and system for transferring knowledge between devices using a proxy data set in federated learning.
본 발명은 과학기술정보통신부의 집단연구지원(R&D)(과제고유번호: 1711135177, 과제번호: 2020R1A4A1018607, 연구과제명: Meta Federated Learning기반 이동엣지 컴퓨팅시스템 핵심구조 개발, 과제관리기관: 한국연구재단, 과제수행기관: 경희대학교(국제캠퍼스) 산학협력단, 연구기간: 2020.07.01.~2023.02.28.) 및 정보통신방속혁신인재양성(R&D)(과제고유번호: 1711139517, 과제번호: 2021-0-02068-001, 연구과제명: 인공지능 혁신 허브 연구 개발, 과제관리기관: 정보통신기획평가원, 과제수행기관: 경희대학교(국제캠퍼스) 산학협력단, 연구기간: 2020.07.01.~2025.12.31.)의 일환으로 수행한 연구로부터 도출된 것이다. 한편, 본 발명의 모든 측면에서 한국 정부의 재산 이익은 없다The present invention is supported by group research support (R&D) of the Ministry of Science and ICT (Task number: 1711135177, task number: 2020R1A4A1018607, research task name: Meta Federated Learning-based mobile edge computing system core structure development, task management institution: National Research Foundation of Korea, Project executing agency: Kyunghee University (International Campus) Industry-Academic Cooperation Foundation, research period: 2020.07.01.~2023.02.28.) and Information and Communication Technology Innovation Talent Fostering (R&D) (Task identification number: 1711139517, task number: 2021-0- 02068-001, Research project title: Research and development of artificial intelligence innovation hub, Project management agency: Information and Communication Planning and Evaluation Institute, Project executing agency: Kyunghee University (International Campus) Industry-University Cooperation Foundation, Research period: 2020.07.01.~2025.12.31.) It was derived from a study conducted as part of On the other hand, there is no property interest of the Korean government in any aspect of the present invention.
최근의 모바일 애플리케이션은 사용자의 데이터에 기반하여 AI기반 카메라, 증강확장현실(XR, Extended Reality), 및 지능형비서와 같은 다양한 AI 기능을 구비하고 있다. 다만, 위와 같은 중앙 집중식 머신러닝(ML)을 사용하는 AI기능은 필수적으로 개인정보데이터가 유출될 염려가 있다. Recent mobile applications are equipped with various AI functions such as AI-based cameras, extended reality (XR), and intelligent assistants based on user data. However, AI functions that use centralized machine learning (ML) as above inevitably have concerns about leakage of personal information data.
이에, 개인정보데이터의 유출을 방지할 수 있는 연합학습(FL)에 기반한 머신러닝(ML)을 사용하는 AI기능의 개발이 지속적으로 진행되고 있다. Accordingly, the development of AI functions using machine learning (ML) based on federated learning (FL) that can prevent leakage of personal information data is continuously progressing.
그러나, 위와 같은 연합학습(FL)에 기반한 머신러닝을 사용하는 AI기능은 중앙컴퓨팅장치와 다양한 사용자디바이스에서 학습데이터의 이질성이 존재하므로 머신러닝학습의 기능이 저하될 수 있고 결론적으로 머신러닝학습모델의 차이가 발생된다. However, the AI function using machine learning based on federated learning (FL) as described above may deteriorate the function of machine learning learning because there is heterogeneity of learning data in the central computing device and various user devices, and consequently, the machine learning learning model difference occurs.
이에, 개인정보데이터의 유출을 방지하는 동시에 중앙컴퓨팅장치와 사용자디바이스에서 머신러닝학습의 성능을 향상시키는 기술이 필요한 실정이다.Accordingly, there is a need for a technique for preventing the leakage of personal information data and at the same time improving the performance of machine learning learning in a central computing device and a user device.
본 발명이 해결하고자 하는 기술적 과제는, 연합학습(FL)에 기반한 머신러닝을 이용하는 AI 학습과정에서 개인정보데이터의 유출을 방지하기 위한 연합학습에서 프록시 데이터셋을 이용한 장치 간 지식 전달 방법 및 그 시스템에 관한 것이다.The technical problem to be solved by the present invention is a method and system for transferring knowledge between devices using a proxy dataset in federated learning to prevent leakage of personal information data in an AI learning process using machine learning based on federated learning (FL). It is about.
또한, 본 발명이 해결하고자 하는 기술적 과제는, 연합학습(FL)에 기반한 머신러닝을 이용하는 AI 학습과정에서 통신데이터부하를 최소화시키면서도 머신러닝의 학습기능을 향상시키기 위한 연합학습에서 프록시 데이터셋을 이용한 장치 간 지식 전달 방법 및 그 시스템에 관한 것이다.In addition, the technical problem to be solved by the present invention is to use a proxy dataset in federated learning to improve the learning function of machine learning while minimizing the communication data load in the AI learning process using machine learning based on federated learning (FL). It relates to a method and system for transferring knowledge between devices.
본 발명의 한 실시예에 따른 연합학습에서 프록시 데이터셋을 이용한 장치 간 지식 전달 방법은, 프록시데이터를 다운로드하는 단계, 관리서버에서 제1 인공신경망함수를 이용하여 입력데이터를 기계학습하여 관리서버출력지식을 생성하는 단계, 제1 내지 제n 디바이스에서 제2 인공신경망함수를 이용하여 입력데이터를 기계학습하여 복수의 디바이스출력지식을 생성하는 단계 및 미리 설정된 횟수에 도달한 경우 기계학습을 중단하고 관리서버출력지식을 제1 내지 제n 디바이스에 전송하고 복수의 디바이스출력지식을 관리서버에 전송하는 단계를 포함한다.A knowledge transfer method between devices using a proxy dataset in federated learning according to an embodiment of the present invention includes the steps of downloading proxy data, machine learning of input data using a first artificial neural network function in a management server, and outputting the information to the management server. Generating knowledge, generating a plurality of device output knowledge by machine learning input data using a second artificial neural network function in the first to nth devices, and stopping and managing machine learning when a preset number of times is reached and transmitting server output knowledge to first to nth devices and transmitting a plurality of device output knowledge to a management server.
또한, 본 발명의 한 실시예에 따른 관리서버에서 제1 인공신경망함수를 이용하여 입력데이터를 기계학습하여 관리서버출력지식을 생성하는 단계는, 제1 내지 제n 디바이스에서 제공된 복수의 제n 디바이스출력지식을 입력받는 단계 및 입력데이터에 기초하여 제1 인공신경망함수에서 생성된 제n 관리서버출력지식 및 프록시데이터에 기초하여 제1 손실함수를 생성하는 단계를 포함한다.In addition, the step of machine learning the input data using the first artificial neural network function in the management server according to an embodiment of the present invention to generate management server output knowledge may include a plurality of n-th devices provided from the first to n-th devices. It includes receiving output knowledge and generating a first loss function based on the nth management server output knowledge and proxy data generated by the first artificial neural network function based on the input data.
또한, 본 발명의 한 실시예에 따른 관리서버에서 제1 인공신경망함수를 이용하여 입력데이터를 기계학습하여 관리서버출력지식을 생성하는 단계는, 복수의 제n 디바이스출력지식, 제n 관리서버출력지식, 및 프록시데이터에 기초하여 제1 정규화함수를 생성하는 단계, 제1 손실함수 및 제1 정규화함수에 기초하여 제1 인공신경망함수를 생성하는 단계 및 제1 인공신경망함수에서 입력데이터를 기계학습하고 제n+1 관리서버출력지식을 생성하는 단계를 더 포함한다.In addition, the step of generating management server output knowledge by machine learning the input data using the first artificial neural network function in the management server according to an embodiment of the present invention includes a plurality of n-th device output knowledge and n-th management server output Generating a first normalization function based on the knowledge and proxy data, generating a first artificial neural network function based on the first loss function and the first regularization function, and machine learning the input data in the first artificial neural network function. and generating an n+1th management server output knowledge.
또한, 본 발명의 한 실시예에 따른 제1 내지 제n 디바이스에서 제2 인공신경망함수를 이용하여 입력데이터를 기계학습하여 복수의 디바이스출력지식을 생성하는 단계는, 관리서버에서 제공된 제n+1 관리서버출력지식을 입력받는 단계, 제1 내지 제n 디바이스 각각에서 생성된 복수의 제n 디바이스개인데이터를 업로드하는 단계 및 입력데이터에 기초하여 생성된 복수의 제n 디바이스출력지식 각각 및 복수의 제n 디바이스개인데이터 각각에 기초하여 복수의 제2 손실함수를 생성하는 단계를 포함한다.In addition, the step of generating a plurality of device output knowledge by machine learning the input data using the second artificial neural network function in the first to nth devices according to an embodiment of the present invention is n+1 provided by the management server. Receiving management server output knowledge, uploading a plurality of n-th device personal data generated from each of the first to n-th devices, and each of a plurality of n-th device output knowledge generated based on the input data and a plurality of n-th device output knowledge and generating a plurality of second loss functions based on each of n device personal data.
또한, 본 발명의 한 실시예에 따른 제1 내지 제n 디바이스에서 제2 인공신경망함수를 이용하여 입력데이터를 기계학습하여 복수의 디바이스출력지식을 생성하는 단계는, 복수의 제n 디바이스출력지식 각각, 제n+1 관리서버출력지식, 및 프록시데이터에 기초하여 복수의 제2 정규화함수를 생성하는 단계, 복수의 제2 손실함수 및 복수의 제2 정규화함수에 기초하여 복수의 제2 인공신경망함수를 생성하는 단계 및 복수의 제2 인공신경망함수에 기초하여 입력데이터를 기계학습하고 복수의 제n+1 디바이스출력지식을 생성하는 단계를 더 포함한다.In addition, the step of generating a plurality of device output knowledge by machine learning the input data using the second artificial neural network function in the first to nth devices according to an embodiment of the present invention, each of the plurality of nth device output knowledge , Generating a plurality of second normalization functions based on the n+1th management server output knowledge and proxy data, a plurality of second artificial neural network functions based on the plurality of second loss functions and the plurality of second normalization functions Generating and machine learning the input data based on a plurality of second artificial neural network functions and generating a plurality of n+1th device output knowledge.
또한, 본 발명의 한 실시예에 따른 미리 설정된 횟수에 도달한 경우 기계학습을 중단하고 관리서버출력지식을 제1 내지 제n 디바이스에 전송하고 복수의 디바이스출력지식을 관리서버에 전송하는 단계를 포함하는 단계는, 미리 설정된 횟수에 도달한 경우 제1 인공신경망함수에서 입력데이터의 기계학습을 중단하고 제n+1 관리서버출력지식을 제1 내지 제n 디바이스에 전송하는 단계 및 미리 설정된 횟수에 도달한 경우 제2 인공신경망함수에서 입력데이터의 기계학습을 중단하고 복수의 제n+1 관리디바이스출력지식을 관리서버에 전송하는 단계를 포함한다. 또한, 본 발명의 한 실시예에 따른 연합학습에서 프록시 데이터셋을 이용한 장치 간 지식 전달시스템은, 프록시데이터 및 제1 인공신경망함수를 이용하여 입력데이터를 기계학습하고 관리서버출력지식을 생성하는 관리서버, 프록시데이터 및 제2 인공신경망함수를 이용하여 입력데이터를 기계학습하고 복수의 디바이스출력지식을 생성하는 제1 내지 제n 디바이스 및 미리 설정된 횟수에 도달한 경우 기계학습을 중단하는 제어서버를 포함한다. In addition, when a preset number of times is reached according to an embodiment of the present invention, machine learning is stopped, the management server output knowledge is transmitted to the first to n th devices, and a plurality of device output knowledge is transmitted to the management server. The step of doing is, when the preset number of times is reached, stopping machine learning of the input data in the first artificial neural network function and transmitting the n+1 management server output knowledge to the first to nth devices and reaching the preset number of times In this case, stopping the machine learning of the input data in the second artificial neural network function and transmitting the output knowledge of the plurality of n+1 management devices to the management server. In addition, a knowledge transfer system between devices using a proxy dataset in federated learning according to an embodiment of the present invention uses proxy data and a first artificial neural network function to perform machine learning on input data and generate management server output knowledge. Includes server, proxy data and a second artificial neural network function to machine learn input data and generate a plurality of device output knowledge, and a control server that stops machine learning when a preset number of times is reached do.
또한, 본 발명의 한 실시예에 따른 상기 관리서버는, 제1 내지 제n 디바이스에서 복수의 제n 디바이스출력지식을 입력받는 제1 입력부, 입력데이터에 기초하여 제1 인공신경망함수에서 생성된 제n 관리서버출력지식 및 프록시데이터에 기초하여 제1 손실함수를 생성하고 복수의 제n 디바이스출력지식, 제n 관리서버출력지식, 및 프록시데이터에 기초하여 제1 정규화 함수를 생성하고 제1 손실함수 및 제1 정규화 함수에 기초하여 제1 인공신경망함수를 생성하는 제1 모델링부 및 제1 인공신경함수에서 입력데이터를 기계학습한 결과인 제n+1 관리서버출력지식을 제1 내지 제n 디바이스에 출력하는 제1 출력부를 포함한다.In addition, the management server according to an embodiment of the present invention includes a first input unit that receives output knowledge from a plurality of n-th devices from first to n-th devices, and a first input unit generated by a first artificial neural network function based on input data. Generating a first loss function based on n management server output knowledge and proxy data, generating a first normalization function based on a plurality of n th device output knowledge, n th management server output knowledge, and proxy data, and generating a first loss function and a first modeling unit for generating a first artificial neural network function based on a first normalization function, and output knowledge of the n+1 management server, which is a result of machine learning of input data from the first artificial neural network function, to the first to nth devices. It includes a first output unit that outputs to.
또한, 본 발명의 한 실시예에 따른 제1 내지 제n 디바이스 각각은, 제n+1 관리서버출력지식을 입력받는 제2 입력부, 미리 저장된 제n 개인의 데이터셋과 프록시 데이터셋으로 학습한 디바이스의 지식 업로드시키는 디바이스 데이터셋 저장부, 입력데이터에 기초하여 생성된 제n 디바이스출력지식 및 미리 저장된 제n 디바이스개인데이터에 기초하여 제2 손실함수를 생성하고 제n 디바이스출력지식, 제n+1 관리서버출력지식, 및 프록시데이터에 기초하여 제2 정규화함수를 생성하고, 제2 손실함수 및 제2 정규화함수에 기초하여 제2 인공신경망함수를 생성하는 제2 모델링부 및 제2 인공신경망함수에 기초하여 입력데이터를 기계학습하고 제n+1 디바이스출력지식을 생성하는 제2 출력부를 포함한다.In addition, each of the 1st to nth devices according to an embodiment of the present invention includes a second input unit for receiving the n+1th management server output knowledge, and a device that has learned from the pre-stored data set of the nth individual and the proxy data set. A device data set storage unit for uploading the knowledge of, a second loss function is generated based on the n-th device output knowledge generated based on the input data and the n-th device personal data stored in advance, and the n-th device output knowledge, n+1-th device output knowledge A second modeling unit for generating a second normalization function based on the management server output knowledge and proxy data, and generating a second artificial neural network function based on the second loss function and the second regularization function, and the second artificial neural network function and a second output unit for performing machine learning on the input data and generating n+1 th device output knowledge.
또한, 본 발명의 한 실시예에 따른 제어서버는, 미리 설정된 횟수에 도달한 경우 제1 인공신경망함수에서 입력데이터의 기계학습을 중단하고 제n+1 관리서버출력지식을 제1 내지 제n 디바이스에 전송하고 미리 설정된 횟수에 도달한 경우 제2 인공신경망함수에서 입력데이터의 기계학습을 중단하고 복수의 제n+1 관리디바이스출력지식을 관리서버에 전송하도록 제어한다.In addition, the control server according to an embodiment of the present invention stops machine learning of the input data in the first artificial neural network function when the preset number of times is reached, and transfers the n+1 management server output knowledge to the first to nth devices. and when the preset number of times is reached, the machine learning of the input data is stopped in the second artificial neural network function, and a plurality of n+1 management device output knowledge is controlled to be transmitted to the management server.
또한, 본 발명의 한 실시예에 따른 연합학습에서 프록시 데이터셋을 이용한 장치 간 지식 전달방법을 실행시키는 프로그램이 기록된 컴퓨터로 판된 가능한 비일시적 기록 매체를 포함한다.In addition, it includes a non-transitory recording medium that can be printed as a computer on which a program for executing a knowledge transfer method between devices using a proxy dataset in federated learning according to an embodiment of the present invention is recorded.
본 발명에 따른 연합학습에서 프록시 데이터셋을 이용한 장치 간 지식 전달방법 및 그 시스템은 연합학습(FL)에 기반한 머신러닝을 이용하는 AI 학습과정에서 개인정보데이터의 유출을 방지할 수 있다.The method and system for transferring knowledge between devices using proxy datasets in federated learning according to the present invention can prevent leakage of personal information data in the AI learning process using machine learning based on federated learning (FL).
또한, 본 발명에 따른 연합학습에서 프록시 데이터셋을 이용한 장치 간 지식 전달방법 및 그 시스템은 연합학습(FL)에 기반한 머신러닝을 이용하는 AI 학습과정에서 통신데이터의 부하를 최소화시키고 학습 속도를 증가시켜 기계 학습의 학습기능을 향상시킬 수 있다.In addition, the knowledge transfer method and system between devices using proxy datasets in federated learning according to the present invention minimizes the load of communication data and increases the learning speed in the AI learning process using machine learning based on federated learning (FL). It can improve the learning function of machine learning.
도 1은 본 발명의 한 실시예에 따른 연합학습에서 프록시 데이터셋을 이용한 장치 간 지식 전달시스템에 관한 도면이다. 1 is a diagram of a knowledge transfer system between devices using a proxy dataset in federated learning according to an embodiment of the present invention.
도 2는 본 발명의 한 실시예에 따른 관리서버 및 복수의 디바이스에서 인공신경망함수를 이용하여 머신러닝을 수행하는 과정을 설명하는 도면이다. 2 is a diagram illustrating a process of performing machine learning using an artificial neural network function in a management server and a plurality of devices according to an embodiment of the present invention.
도 3은 본 발명의 한 실시예에 따른 관리서버에서 머신러닝을 수행하는 과정을 설명하는 흐름도이다. 3 is a flowchart illustrating a process of performing machine learning in a management server according to an embodiment of the present invention.
도 4는 본 발명의 한 실시예에 따른 복수의 디바이스에서 인공신경망함수를 이용하여 머신러닝을 수행하는 과정을 설명하는 흐름도이다.4 is a flowchart illustrating a process of performing machine learning using an artificial neural network function in a plurality of devices according to an embodiment of the present invention.
이하, 첨부한 도면을 참고로 하여 본 발명의 여러 실시 예들에 대하여 본 발명이 속하는 기술 분야에서 통상의 지식을 가진 자가 용이하게 실시할 수 있도록 상세히 설명한다. 본 발명은 여러 가지 상이한 형태로 구현될 수 있으며 여기에서 설명하는 실시 예들에 한정되지 않는다.Hereinafter, various embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily carry out the present invention. The present invention may be embodied in many different forms and is not limited to the embodiments set forth herein.
본 발명을 명확하게 설명하기 위해서 설명과 관계없는 부분은 생략하였으며, 명세서 전체를 통하여 동일 또는 유사한 구성요소에 대해서는 동일한 참조 부호를 붙이도록 한다. 따라서 앞서 설명한 참조 부호는 다른 도면에서도 사용할 수 있다.In order to clearly describe the present invention, parts irrelevant to the description are omitted, and the same reference numerals are assigned to the same or similar components throughout the specification. Therefore, the reference numerals described above can be used in other drawings as well.
또한, 도면에서 나타난 각 구성의 크기 및 두께는 설명의 편의를 위해 임의로 나타내었으므로, 본 발명이 반드시 도시된 바에 한정되지 않는다. 도면에서 여러 층 및 영역을 명확하게 표현하기 위하여 두께를 과장되게 나타낼 수 있다.In addition, since the size and thickness of each component shown in the drawings are arbitrarily shown for convenience of explanation, the present invention is not necessarily limited to the shown bar. In the drawing, the thickness may be exaggerated to clearly express various layers and regions.
또한, 설명에서 "동일하다"라고 표현한 것은, "실질적으로 동일하다"는 의미일 수 있다. 즉, 통상의 지식을 가진 자가 동일하다고 납득할 수 있을 정도의 동일함일 수 있다. 그 외의 표현들도 "실질적으로"가 생략된 표현들일 수 있다.In addition, the expression "the same" in the description may mean "substantially the same". That is, it may be the same to the extent that a person with ordinary knowledge can understand that it is the same. Other expressions may also be expressions in which "substantially" is omitted.
도 1은 본 발명의 한 실시예에 따른 연합학습에서 프록시 데이터셋을 이용한 장치 간 지식 전달시스템에 관한 도면이다. 1 is a diagram of a knowledge transfer system between devices using a proxy dataset in federated learning according to an embodiment of the present invention.
본 발명의 한 실시예에 따른 연합학습에서 프록시 데이터셋을 이용한 장치 간 지식 전달시스템(1)은 관리서버(20) 및 복수의 디바이스(40)를 포함할 수 있다.In federated learning according to an embodiment of the present invention, the knowledge transfer system 1 between devices using a proxy dataset may include a
관리서버(20)는 제1 입력부(22), 제1 모델링부(23), 및 제1 출력부(24)를 포함할 수 있다.The
제1 입력부(22)는 프록시데이터를 다운로드 할 수 있다. 제1 입력부(22)는 복수의 디바이스(40)에서 기계학습한 결과물인 복수의 디바이스출력지식(g1, ..., gn, 도 2 참고)을 입력 받을 수 있다.The
제1 모델링부(23)는 제1 입력부(22)에서 제공되는 복수의 디바이스출력지식(g1, ..., gn), 프록시데이터, 및 제1 모델링부(23)(또는, 글로벌(전역) 모델)에 미리 저장된 제1 인공신경망함수(ANN(1))에서 기계학습한 결과물인 관리서버출력지식(gs, 도 2 참고)에 기초하여 입력데이터(x1)를 기계학습하기 위한 제1 인공신경망함수(ANN(1))를 재생성할 수 있다. The
구체적으로, 제1 모델링부(23)는 제1 모델링부(23)에 미리 저장된 제1 인공신경망함수(ANN(1))에서 기계학습한 결과물인 관리서버출력지식(gs)과 업로드된 프록시데이터에 기초하여 제1 손실함수를 생성할 수 있다.Specifically, the
제1 모델링부(23)가 생성하는 제1 손실함수는 다운로드된 프록시데이터와 미리 저장된 제1 인공신경망함수(ANN(1))에서 기계학습한 결과물인 관리서버출력지식(gs)의 차이 값을 함수로 나타낸 것으로서, 지식증류법(KD, Knowledge Distillation)을 이용하여 생성될 수 있다. The first loss function generated by the
또한, 제1 모델링부(23)는 제1 인공신경망함수(ANN(1))에서 기계학습한 결과물인 관리서버출력지식(gs), 다운로드된 프록시데이터, 및 복수의 디바이스출력지식(g1, ..., gn)에 기초하여 제1 정규화함수를 생성할 수 있다. In addition, the
제1 모델링부(23)가 생성하는 제1 정규화함수는 다운로드된 프록시데이터를 기준으로 제1 인공신경망함수(ANN(1))에서 기계학습한 결과물인 관리서버출력지식(gs) 및 복수의 디바이스출력지식(g1, ..., gn) 간의 정규화 값을 함수로 나타낸 것으로서, 쿨백-라이블러 발산(Kullback-Leiber divergence, KLD) 및 Jensen-Shannon divergence 발산 등을 이용하여 생성될 수 있다. The first normalization function generated by the
쿨백-라이블러 발산(Kullback-Leiber divergence, KLD)이란 두개의 확률분포의 차이를 계산하는데 사용되는 함수로서, 이상적인 분포에 대해서 그 분포에 근사하는 다른 분포를 사용해 데이터를 샘플링 한다면 발생할 수 있는 정보 엔트로피의 차이를 의미한다. Jensen-Shannon divergence 발산은 쿨백-라이블러 발산(Kullback-Leiber divergence, KLD)을 거리 개념으로 사용하는 것으로서, 쿨백-라이블러 발산(Kullback-Leiber divergence, KLD)에 기초하여 2개의 정보 엔트로피 차이를 산출하고 평균을 내는 방식을 의미한다.The Kullback-Leiber divergence (KLD) is a function used to calculate the difference between two probability distributions. The information entropy that would occur if data were sampled using another distribution that approximated the ideal distribution. means the difference between The Jensen-Shannon divergence divergence uses the Kullback-Leiber divergence (KLD) as a distance concept, and calculates the difference between two information entropies based on the Kullback-Leiber divergence (KLD). and how to average it.
The first modeling unit 23 may regenerate the first artificial neural network function (ANN(1)) for machine learning on the input data (x1), based on the first loss function and the first normalization function.
The first output unit 24 may provide the management server output knowledge (gs), which is the result of machine learning on the input data (x1) based on the first artificial neural network function (ANN(1)), to the plurality of devices 40. The management server output knowledge (gs) output from the first output unit 24 may be provided to the plurality of devices 40 through the network 30.
Each of the plurality of devices 40 may include a device data storage unit 41, a second input unit 42, a second modeling unit 43, and a second output unit 44.
The device data storage unit 41 may store device personal data of the device 40. Here, the device personal data is data of the user of the device 40 and may take various forms such as images, text, and audio files. The device personal data stored in the device data storage units 41 of the plurality of devices 40 may differ from device to device, depending on each user's data.
The second input unit 42 may download the proxy data. The second input unit 42 may also receive the management server output knowledge (gs), which is the result of machine learning performed by the management server 20.
The second modeling unit 43 may regenerate a second artificial neural network function (ANN(2(1)), ..., ANN(2(n))) for machine learning on input data (x2), based on the management server output knowledge (gs) provided by the second input unit 42, the device personal data provided by the device data storage unit 41, and the device output knowledge (g1, ..., gn), which is the result of machine learning performed by the second artificial neural network function (ANN(2(1)), ..., ANN(2(n))) stored in advance in the second modeling unit 43.
Specifically, the second modeling unit 43 may generate a second loss function based on the device personal data provided by the device data storage unit 41 and the downloaded proxy data.
The second loss function generated by the second modeling unit 43 expresses, as a function, the difference between the downloaded proxy data and the device personal data provided by the device data storage unit 41, and may be generated using the same knowledge distillation (KD) technique as the first loss function.
In addition, the second modeling unit 43 may generate a second normalization function based on the device output knowledge (g1, ..., gn) produced by the second artificial neural network function (ANN(2(1)), ..., ANN(2(n))), the downloaded proxy data, and the management server output knowledge (gs) provided by the management server 20.
The second normalization function generated by the second modeling unit 43 expresses, as a function, a normalization value between the device output knowledge (g1, ..., gn) produced by the second artificial neural network function (ANN(2(1)), ..., ANN(2(n))) and the management server output knowledge (gs), evaluated with respect to the downloaded proxy data, and may be generated using the Kullback-Leibler divergence (KLD), the Jensen-Shannon divergence, or the like, in the same manner as the first normalization function.
The second modeling unit 43 may regenerate the second artificial neural network function (ANN(2(1)), ..., ANN(2(n))) for machine learning on the input data (x2), based on the second loss function and the second normalization function.
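One plausible reading of this device-side update, offered only as an illustrative sketch for a single device k, combines a supervised loss on the device personal data with a proxy-based consistency term against the management server output knowledge. The assumption that the personal data is labeled, as well as the names device_model, lambda_reg, and temperature, are not taken from the embodiment.

```python
# Sketch of one device-side update: local loss on private data plus a KL-based
# consistency term on the shared proxy data. Illustrative assumptions only.
import torch
import torch.nn.functional as F

def device_round(device_model, optimizer,
                 private_x, private_y,        # device personal data (assumed labeled); never transmitted
                 proxy_x, server_knowledge,   # shared proxy batch and server output knowledge (gs) on it
                 lambda_reg=1.0, temperature=2.0):
    optimizer.zero_grad()
    # second loss function: supervised loss on the device personal data
    local_loss = F.cross_entropy(device_model(private_x), private_y)
    # second normalization function: KL between device and server outputs on the proxy data
    log_p_dev = F.log_softmax(device_model(proxy_x) / temperature, dim=1)
    p_srv = F.softmax(server_knowledge / temperature, dim=1)
    reg = F.kl_div(log_p_dev, p_srv, reduction="batchmean") * temperature ** 2
    # "regenerate" ANN(2(k)) by taking a gradient step on the combined objective
    (local_loss + lambda_reg * reg).backward()
    optimizer.step()
    # return the updated device output knowledge (gk) on the proxy data
    with torch.no_grad():
        return device_model(proxy_x)
```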
The second output unit 44 may provide the device output knowledge (g1, ..., gn), which is the result of machine learning on the input data (x2) based on the second artificial neural network function (ANN(2(1)), ..., ANN(2(n))), to the management server 20. The device output knowledge (g1, ..., gn) output from the second output unit 44 may be provided to the management server 20 through the network 30.
When a preset number of iterations is exceeded, the control server 50 may stop the machine learning on the input data (x1) in the first artificial neural network function (ANN(1)). Likewise, when the preset number of iterations is exceeded, the control server 50 may stop the machine learning on the input data (x2) in the second artificial neural network function (ANN(2(1)), ..., ANN(2(n))).
FIG. 2 is a diagram illustrating a process in which the management server and the plurality of devices perform machine learning using artificial neural network functions according to an embodiment of the present invention.
The first artificial neural network function (ANN(1)) generated by the first modeling unit 23 included in the management server 20 may consist of a first input layer (IL(1)), a first hidden layer (HL(1)), and a first output layer (OL(1)).
The input data (x1) may be applied to the first input layer (IL(1)). As the input data (x1) applied to the first input layer (IL(1)) passes through the first hidden layer (HL(1)), management server output value data (zs) may be generated. The management server output value data (zs) generated through the first hidden layer (HL(1)) may be applied to the first output layer (OL(1)).
The first hidden layer (HL(1)) may be configured with various activation functions, and the input data (x1) applied to the first hidden layer (HL(1)) may be multiplied by weight values through these activation functions and converted into the management server output value data (zs).
Hereinafter, the weight values by which the input data (x1) is multiplied under the activation functions are referred to as characteristic value data (es).
Here, the management server output value data (zs) contains information on the characteristic value data (es) by which the input data (x1) is multiplied under the activation functions.
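The layered structure described above (an input layer, a hidden layer with activation functions, and an output layer producing the output value data) can be sketched minimally as follows; the layer sizes are arbitrary assumptions.

```python
# Minimal sketch of the input layer / hidden layer / output layer structure.
# Sizes (in_dim, hidden_dim, out_dim) are assumptions for illustration only.
import torch
import torch.nn as nn

class ServerNet(nn.Module):
    def __init__(self, in_dim=32, hidden_dim=64, out_dim=10):
        super().__init__()
        self.hidden = nn.Linear(in_dim, hidden_dim)  # weights play the role of the characteristic values (es)
        self.act = nn.ReLU()                         # one of the "various activation functions"
        self.out = nn.Linear(hidden_dim, out_dim)    # output layer

    def forward(self, x1):
        h = self.act(self.hidden(x1))
        zs = self.out(h)                             # management server output value data (zs)
        return zs

zs = ServerNet()(torch.randn(4, 32))                 # e.g. a batch of four input samples
```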
The management server output value data (zs) generated in the first output layer (OL(1)) of the first modeling unit 23 may be applied to the first output unit 24.
The first output unit 24 may then provide the management server output knowledge (gs) to the plurality of devices 40 through the network 30, based on the management server output value data (zs) containing information on the characteristic value data (es).
The second artificial neural network function (ANN(2(1)), ..., ANN(2(n))) generated by the second modeling unit 43 included in each of the plurality of devices 40 may consist of a second input layer (IL(2)), a second hidden layer (HL(2)), and a second output layer (OL(2)).
The input data (x2) may be applied to the second input layer (IL(2)). As the input data (x2) applied to the second input layer (IL(2)) passes through the second hidden layer (HL(2)), device output value data (z1, ..., zn) may be generated. The device output value data (z1, ..., zn) generated through the second hidden layer (HL(2)) may be applied to the second output layer (OL(2)).
The second hidden layer (HL(2)) may be configured with various activation functions, and the input data (x2) applied to the second hidden layer (HL(2)) may be multiplied by weight values through these activation functions and converted into the device output value data (z1, ..., zn).
Hereinafter, each of the weight values by which the input data (x2) is multiplied under the activation functions in the plurality of devices 40 is referred to as characteristic value data (e1, ..., en).
Here, the device output value data (z1, ..., zn) contains information on the characteristic value data (e1, ..., en), that is, the weight values by which the input data (x2) is multiplied under the activation functions.
The plurality of pieces of device output value data (z1, ..., zn) generated in the second output layer (OL(2)) of the second modeling unit 43 may be applied to the second output unit 44.
The second output unit 44 may then provide the plurality of pieces of device output knowledge (g1, ..., gn) to the management server 20 through the network 30, based on the plurality of pieces of device output value data (z1, ..., zn) containing information on the characteristic value data (e1, ..., en).
Referring to FIGS. 1 and 2 together, the management server 20 provides the management server output knowledge (gs) to the plurality of devices 40 on the basis of the characteristic value data (es), that is, the weight values by which the input data (x1) is multiplied under the activation functions, or of the management server output value data (zs) containing information on the characteristic value data (es). The communication data load can therefore be minimized and the communication speed increased.
Similarly, the plurality of devices 40 provide the plurality of pieces of device output knowledge (g1, ..., gn) to the management server 20 on the basis of the characteristic value data (e1, ..., en), that is, the weight values by which the input data (x2) is multiplied under the activation functions, or of the plurality of pieces of device output value data (z1, ..., zn) containing information on the characteristic value data (e1, ..., en). The communication data load can therefore be minimized and the communication speed increased.
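A rough, purely illustrative comparison suggests why exchanging output knowledge can reduce the communication load relative to exchanging full model weights, as is done in conventional weight-averaging federated learning; the proxy-set size, class count, and parameter count below are arbitrary assumptions and do not appear in the embodiment.

```python
# Back-of-the-envelope upload-size comparison per round (illustrative numbers).
bytes_per_float = 4
proxy_samples, num_classes = 1_000, 10
model_parameters = 5_000_000

knowledge_upload = proxy_samples * num_classes * bytes_per_float  # logits on the proxy set
weight_upload = model_parameters * bytes_per_float                # full model weights

print(f"knowledge upload: about {knowledge_upload / 1e3:.0f} KB per round")
print(f"weight upload:    about {weight_upload / 1e6:.0f} MB per round")
```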
The second artificial neural network functions (ANN(2(1)), ..., ANN(2(n))) generated by the second modeling units 43 of the plurality of devices 40 may each be generated based on the device personal data provided by the device data storage unit 41 of the corresponding device 40. However, as described above, the plurality of devices 40 transmit only the characteristic value data (e1, ..., en) or the device output value data (z1, ..., zn) to the management server 20, and the device personal data itself is not transmitted. Since the users' personal data on the plurality of devices 40 is never sent to the management server 20, leakage of personal data can be prevented.
The second artificial neural network functions (ANN(2(1)), ..., ANN(2(n))) generated by the second modeling units 43 of the plurality of devices 40 are generated based on the management server output knowledge (gs), which is itself based on the management server output value data (zs) provided by the management server 20, and the first artificial neural network function (ANN(1)) generated by the first modeling unit 23 of the management server 20 is generated based on the plurality of pieces of device output knowledge (g1, ..., gn), which are themselves based on the device output value data (z1, ..., zn) provided by the plurality of devices 40. The heterogeneity between the data learned at the management server 20 and the data learned at the plurality of devices 40 can therefore be reduced, and the performance of the machine learning can be improved.
FIG. 3 is a flowchart illustrating a process in which the management server performs machine learning according to an embodiment of the present invention.
In step S10, the proxy data may be downloaded.
Specifically, the first input unit 22 of the management server 20 may download the proxy data.
In step S11, a plurality of pieces of n-th device output knowledge provided by the plurality of devices may be received.
Specifically, the first input unit 22 of the management server 20 may receive the plurality of pieces of n-th device output knowledge (g1, ..., gn) generated by the plurality of devices 40 based on the second artificial neural network functions (ANN(2(1)), ..., ANN(2(n))).
In step S12, a first loss function may be generated using the n-th management server output knowledge (gs) computed through the first artificial neural network function (ANN(1)) and the proxy data.
Specifically, the first modeling unit 23 of the management server 20 may generate the first loss function based on the n-th management server output knowledge (gs), which is the result of machine learning performed by the first artificial neural network function (ANN(1)), and the downloaded proxy data.
In step S13, a first normalization function may be generated using the n-th management server output knowledge (gs) computed through the first artificial neural network function (ANN(1)), the plurality of pieces of n-th device output knowledge (g1, ..., gn), and the proxy data.
Specifically, the first modeling unit 23 of the management server 20 may generate the first normalization function using the n-th management server output knowledge (gs), which is the result of machine learning performed by the first artificial neural network function (ANN(1)), the plurality of pieces of n-th device output knowledge (g1, ..., gn) received in step S11, and the proxy data.
In step S14, the first artificial neural network function may be regenerated based on the first loss function and the first normalization function.
Specifically, the first modeling unit 23 of the management server 20 may regenerate the first artificial neural network function (ANN(1)) based on the first loss function generated in step S12 and the first normalization function generated in step S13.
In step S15, the regenerated first artificial neural network function (ANN(1)) may perform machine learning on the input data and generate (n+1)-th management server output knowledge (gs).
Specifically, the input data (x1) may be applied to the first input layer (IL(1)) of the first modeling unit 23 of the management server 20, and the (n+1)-th management server output knowledge (gs) may be generated through the first artificial neural network function (ANN(1)).
In step S16, the (n+1)-th management server output knowledge may be provided to each of the plurality of devices.
Specifically, when the preset number of learning iterations has been reached, the control server 50 stops the machine learning in the first modeling unit 23, and the first output unit 24 may provide the (n+1)-th management server output knowledge (gs) to each of the plurality of devices 40.
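The flow of steps S10 to S16 can be summarized in a single-round sketch such as the following, in the same spirit as the distillation-loss and divergence sketches above. The averaging of the device output knowledge, the data shapes, the assumption that the proxy reference targets are soft label vectors, and lambda_reg are illustrative assumptions rather than features of the embodiment.

```python
# Sketch of one management-server round (steps S10 to S16). Illustrative only.
import torch
import torch.nn.functional as F

def server_round(server_model, optimizer, proxy_x, proxy_targets,
                 device_knowledge_list, lambda_reg=1.0, temperature=2.0):
    # S11: device output knowledge g1..gn received; here simply averaged over devices
    g_avg = torch.stack(device_knowledge_list).mean(dim=0)

    optimizer.zero_grad()
    logits = server_model(proxy_x)                                   # forward pass producing zs
    log_p = F.log_softmax(logits / temperature, dim=1)

    # S12: first loss function (server outputs vs proxy reference targets)
    loss_kd = F.kl_div(log_p, F.softmax(proxy_targets / temperature, dim=1),
                       reduction="batchmean") * temperature ** 2
    # S13: first normalization function (server outputs vs aggregated device knowledge)
    reg = F.kl_div(log_p, F.softmax(g_avg / temperature, dim=1),
                   reduction="batchmean") * temperature ** 2

    # S14: regenerate ANN(1) by minimizing loss + regularizer
    (loss_kd + lambda_reg * reg).backward()
    optimizer.step()

    # S15-S16: produce the (n+1)-th management server output knowledge to broadcast
    with torch.no_grad():
        return server_model(proxy_x)
```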
FIG. 4 is a flowchart illustrating a process in which the plurality of devices perform machine learning using artificial neural network functions according to an embodiment of the present invention.
In step S20, the proxy data may be downloaded.
Specifically, the second input units 42 of the plurality of devices 40 may download the proxy data.
In step S21, the (n+1)-th management server output knowledge provided by the management server may be received.
Specifically, the second input units 42 of the plurality of devices 40 may receive the (n+1)-th management server output knowledge (gs) provided by the management server 20.
In step S22, a plurality of pieces of n-th device personal data generated by the plurality of devices may be loaded.
Specifically, the plurality of devices 40 may load the plurality of pieces of n-th device personal data stored in their device data storage units 41.
In step S23, a plurality of second loss functions may be generated based on the plurality of pieces of n-th device output knowledge computed through the second artificial neural network functions and the plurality of pieces of n-th device personal data.
Specifically, the second modeling units 43 of the plurality of devices 40 may generate the plurality of second loss functions based on the n-th device personal data loaded from the device data storage units 41 and the plurality of pieces of n-th device output knowledge (g1, ..., gn) computed through the second artificial neural network functions (ANN(2(1)), ..., ANN(2(n))).
For example, the second modeling unit 43 of one device 40 may generate a second loss function based on the n-th device personal data loaded from the device data storage unit 41 of that device 40 and the n-th device output knowledge computed through the second artificial neural network function of that device 40.
Likewise, the second modeling unit 43 of another device 40 may generate a second loss function based on the n-th device personal data loaded from the device data storage unit 41 of that other device 40 and the n-th device output knowledge computed through the second artificial neural network function of that other device 40.
In step S25, a plurality of second normalization functions may be generated using the plurality of pieces of n-th device output knowledge computed through the second artificial neural network functions, the (n+1)-th management server output knowledge provided by the management server, and the proxy data.
Specifically, the second modeling units 43 of the plurality of devices 40 may generate the second normalization functions using the n-th device output knowledge (g1, ..., gn), which is the result of machine learning performed by the second artificial neural network functions (ANN(2(1)), ..., ANN(2(n))), the (n+1)-th management server output knowledge (gs) provided by the management server 20, and the proxy data.
In step S26, the second artificial neural network functions may be regenerated based on the second loss functions and the second normalization functions.
Specifically, the second modeling units 43 of the plurality of devices 40 may regenerate the second artificial neural network functions (ANN(2(1)), ..., ANN(2(n))) based on the second loss functions generated in step S23 and the second normalization functions generated in step S25.
In step S27, the regenerated second artificial neural network functions may perform machine learning on the input data and generate a plurality of pieces of (n+1)-th device output knowledge.
Specifically, the input data (x2) may be applied to the second input layers (IL2(1), ..., IL2(n)) of the second modeling units 43 of the plurality of devices 40, and the plurality of pieces of (n+1)-th device output knowledge (g1, ..., gn) may be generated through the second artificial neural network functions (ANN(2(1)), ..., ANN(2(n))).
In step S28, the plurality of pieces of (n+1)-th device output knowledge may be compared with the (n+1)-th management server output knowledge.
Specifically, when the preset number of learning iterations has been reached, the control server 50 stops the machine learning in the second modeling units 43, and the second output units 44 may provide the (n+1)-th device output knowledge to the management server 20.
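Finally, the alternation between the server-side flow of FIG. 3 and the device-side flow of FIG. 4, together with the stopping condition enforced by the control server 50, can be sketched at a high level as follows; server_round and device_round refer to the sketches above, and MAX_ROUNDS and the data objects are assumptions.

```python
# High-level orchestration sketch: alternate server and device rounds until
# the preset number of iterations is reached (the control server's role).
import torch

MAX_ROUNDS = 10  # preset iteration count monitored by the control server (assumption)

def run_federation(server_model, server_opt, devices, proxy_x, proxy_targets):
    # devices: list of (device_model, optimizer, private_x, private_y) tuples
    with torch.no_grad():
        device_knowledge = [m(proxy_x) for m, _, _, _ in devices]    # initial g1..gn
    for _ in range(MAX_ROUNDS):                                      # stop when the preset count is reached
        gs = server_round(server_model, server_opt, proxy_x, proxy_targets, device_knowledge)
        device_knowledge = [device_round(m, opt, px, py, proxy_x, gs)
                            for m, opt, px, py in devices]
    return server_model, [m for m, _, _, _ in devices]
```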
The drawings referred to above and the detailed description of the invention are merely illustrative of the present invention; they are used only for the purpose of explaining the present invention and are not intended to limit its meaning or the scope of the invention set forth in the claims. Therefore, those of ordinary skill in the art will understand that various modifications and other equivalent embodiments are possible therefrom. Accordingly, the true scope of technical protection of the present invention should be determined by the technical spirit of the appended claims.
Claims (11)
Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR1020210186279A | 2021-12-23 | 2021-12-23 | Method for cross-device knowledge transfer using proxy dataset in federated learning and system |
| KR10-2021-0186279 | 2021-12-23 | | |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| WO2023120776A1 | 2023-06-29 |
Family ID: 86903063
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2021/019745 | Device-to-device knowledge transmission method using proxy dataset in federated learning, and system therefor | 2021-12-23 | 2021-12-23 |
Country Status (2)

| Country | Link |
|---|---|
| KR (1) | KR20230096617A (en) |
| WO (1) | WO2023120776A1 (en) |
Citations (5)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20190032433A | 2016-07-18 | 2019-03-27 | Nantomics, LLC | Distributed machine learning systems, apparatus, and methods |
| JP2019144642A | 2018-02-16 | 2019-08-29 | Nippon Telegraph and Telephone Corporation | Distributed deep learning system |
| KR20210051604A | 2019-10-31 | 2021-05-10 | SK Telecom Co., Ltd. | Distributed Deep Learning System and Its Operation Method |
| CN113691594A | 2021-08-11 | 2021-11-23 | Hangzhou Dianzi University | A method to solve the data imbalance problem in federated learning based on the second derivative |
| US20210365841A1 | 2020-05-22 | 2021-11-25 | Kiarash SHALOUDEGI | Methods and apparatuses for federated learning |

- 2021-12-23: WO — PCT/KR2021/019745, published as WO2023120776A1, active, Application Filing
- 2021-12-23: KR — KR1020210186279A, published as KR20230096617A, active, Pending
Also Published As

| Publication number | Publication date |
|---|---|
| KR20230096617A | 2023-06-30 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21969140; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 21969140; Country of ref document: EP; Kind code of ref document: A1 |