Disclosure of Invention
The application provides a joint learning method and a joint learning device based on a blockchain network. According to the method, the training result of the joint learning is fed back through a state channel, so that the high transactions per second (TPS) requirement of combining joint learning with blockchain technology can be met.
In a first aspect, a joint learning method based on a blockchain network is provided, which includes: obtaining initial model parameters, where the initial model parameters are parameters used by a device participating in joint learning to establish an initial model; training the initial model parameters to generate a training result for updating the initial model parameters; and sending the training result to an application server node through a state channel, where the state channel is an off-chain channel between the device and the application server node, and the off-chain channel is located outside the blockchain network.
According to the joint learning method provided by the application, the device can send the training result of participating in joint learning through the state channel. That is, a state channel can be established between the application server node and the device participating in the joint learning, so that the training result obtained after the device performs local training in the joint learning does not need to be shared among the nodes of the blockchain network, and the high TPS requirement of combining joint learning with blockchain technology can be met. In addition, because the training result is sent through the state channel, sharing of the training result among the nodes of the blockchain network is avoided, which protects the data privacy of the training result.
It should be appreciated that a state channel is an "off-chain" technique for performing transactions and other state updates. That is, a state channel may refer to an off-chain channel of a blockchain: data or information sent through the off-chain channel does not need to be shared among the nodes of the blockchain, and the off-chain channel may be established between two nodes.
For example, the state channel may be a unidirectional off-chain payment channel. For example, a unidirectional off-chain payment channel may be established between the application server node and the device participating in the joint learning, and is used by the application server node to pay digital currency to the device after receiving the training result of the joint learning sent by the device.
For example, the state channel may be a bidirectional off-chain channel. For example, a bidirectional off-chain channel may be established between the application server node and the device participating in the joint learning; the device may send the local training result of the joint learning to the application server node through the bidirectional off-chain channel, and the application server node may pay digital currency to the device after receiving the training result sent by the device.
It should be noted that, the initial model may be a machine learning model or may also be a computer algorithm model, for example, the initial model may be a neural network model used in machine learning, and when the initial model is a neural network model, the initial parameters may be weight values used to build the neural network model. When the initial model is a computer algorithm model, the initial parameters may be parameters required to build the algorithm model. The foregoing is illustrative of the present application and is not to be construed as limiting thereof.
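As an illustrative, non-limiting sketch of the above (and not part of the claimed method), the following Python snippet shows how a device might rebuild a small feed-forward neural network from received weight parameters; the function name build_initial_model and the network shape are assumptions made only for this example.

```python
import numpy as np

def build_initial_model(weights):
    """Reconstruct an initial model from the weight values received from the server."""
    def forward(x):
        h = np.asarray(x, dtype=float)
        for i, w in enumerate(weights):
            h = h @ w
            if i < len(weights) - 1:      # hidden layers use a ReLU non-linearity
                h = np.maximum(h, 0.0)
        return h
    return forward

# Example: two weight matrices define a 4-8-2 network.
initial_parameters = [np.random.randn(4, 8), np.random.randn(8, 2)]
model = build_initial_model(initial_parameters)
print(model([1.0, 0.5, -0.2, 0.3]))
```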
With reference to the first aspect, in certain implementation manners of the first aspect, the method further includes receiving channel address information sent by the application server node, where the channel address information is used to identify the status channel.
With reference to the first aspect, in certain implementation manners of the first aspect, the method further includes obtaining a joint learning smart contract, where the joint learning smart contract includes training logic instructions for the initial model parameters;
and the training the initial model parameters to generate a training result for updating the initial model parameters includes running the joint learning smart contract to train the initial model parameters and generate the training result.
In one possible implementation manner, the intelligent contract for joint learning may include data information required for joint learning, content requirements of training results fed back by the device to the application server, and corresponding fee information of the training results fed back by the device.
With reference to the first aspect, in certain implementation manners of the first aspect, the acquiring the joint learning smart contract includes acquiring the joint learning smart contract through the blockchain network, where the joint learning smart contract is an under-chain smart contract, and an execution environment of the under-chain smart contract does not belong to the blockchain network.
In the present application, the joint learning smart contract may be an under-chain smart contract deployed to the blockchain network, where an under-chain smart contract refers to a smart contract whose execution environment does not belong to the blockchain network. Deploying the under-chain smart contract to the blockchain network can ensure the trustworthiness of the joint learning smart contract.
With reference to the first aspect, in certain implementations of the first aspect, the running the joint learning smart contract includes running the joint learning smart contract in a joint learning application.
For example, the devices participating in the joint learning may install a joint learning application, the application server node may issue an address that may be associated with the joint learning smart contract to the joint learning application, and the devices participating in the joint learning may download and run the joint learning smart contract in the joint learning application.
With reference to the first aspect, in certain implementations of the first aspect, the running the joint learning smart contract in a joint learning application to train the initial model parameters includes running the joint learning smart contract under a trusted execution environment TEE to train the initial model parameters.
In the application, the device can run the joint learning smart contract in the trusted execution environment, thereby ensuring the security of running the joint learning smart contract.
For example, devices based on system on chip (SoC) hardware may provide a three-tier hardware security architecture including a rich runtime environment (rich execution environment, REE), a trusted runtime environment (trusted execution environment, TEE), and a secure runtime environment (secure execution environment, SEE), where the TEE may run security sensitive programs and save security sensitive data.
For example, the joint learning smart contract may be run in a joint learning application under a trusted execution environment TEE to train the initial model parameters.
With reference to the first aspect, in certain implementation manners of the first aspect, after the training result is sent to an application server node through a status channel, the method further includes receiving digital currency paid by the application server node through the status channel.
With reference to the first aspect, in certain implementation manners of the first aspect, the method further includes sending a first message to the application server node, where the first message is used to indicate that the device participates in joint learning, and the first message includes a digital money wallet address of the device.
In the application, the device can send a message to the application server node to indicate the participation in the joint learning and send the information of the digital currency wallet address so as to obtain the corresponding digital currency after sending the training result to the application server node.
It should be appreciated that a digital currency wallet address may be used to identify a device. When a plurality of devices participate in the joint learning, the application server node distinguishes the different devices according to the digital currency wallet addresses of the plurality of devices, so that after receiving the training results of the joint learning sent by the plurality of devices, the application server node pays digital currency to the plurality of devices respectively.
With reference to the first aspect, in certain implementation manners of the first aspect, the receiving the digital currency paid by the application server node through the status channel includes receiving the digital currency paid by the application server node through the status channel under a trusted execution environment TEE.
With reference to the first aspect, in certain implementation manners of the first aspect, the method further includes determining that joint learning is finished, and sending a first transaction to the blockchain network, wherein the first transaction is used for indicating to close the state channel.
It will be appreciated that the device may receive a number of transactions sent by the application server node, so the digital currency held by the device in the state channel is continually incremented, and only the latest transaction needs to be retained. When the device decides to close the state channel, it can sign the latest transaction, send it to the blockchain network, and take out the digital currency belonging to the device.
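A minimal, hedged sketch of the device-side bookkeeping described above: only the newest channel update is retained, and that update is signed and packaged into a closing transaction. The class and function names (DeviceChannelView, sign, build_closing_transaction) are hypothetical, and the HMAC signature is only a stand-in for the real signature scheme.

```python
import hashlib
import hmac
import json

def sign(private_key: bytes, payload: bytes) -> str:
    # Stand-in signature for illustration; a real channel would use an ECDSA key pair.
    return hmac.new(private_key, payload, hashlib.sha256).hexdigest()

class DeviceChannelView:
    """Device-side view of the state channel: only the newest update is kept."""
    def __init__(self):
        self.latest_update = None

    def on_update(self, update):
        # Each update carries a monotonically increasing nonce; older ones are discarded.
        if self.latest_update is None or update["nonce"] > self.latest_update["nonce"]:
            self.latest_update = update

    def build_closing_transaction(self, device_private_key: bytes):
        payload = json.dumps(self.latest_update, sort_keys=True).encode()
        return {"update": self.latest_update,
                "device_sig": sign(device_private_key, payload)}

view = DeviceChannelView()
view.on_update({"nonce": 1, "device_balance": 10, "server_sig": "..."})
view.on_update({"nonce": 2, "device_balance": 20, "server_sig": "..."})
closing_tx = view.build_closing_transaction(b"device-demo-key")
print(closing_tx["update"]["device_balance"])   # 20: the latest balance is settled on chain
```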
In a second aspect, a joint learning method based on a blockchain network is provided, which includes: sending initial model parameters to a device, where the initial model parameters are parameters used by the device to establish an initial model participating in joint learning; and receiving, through a state channel, a training result for updating the initial model parameters, where the state channel is an off-chain channel between the device and an application server node, and the off-chain channel is located outside the blockchain network.
According to the joint learning method based on the blockchain network, the application server node can receive, through the state channel, the training results sent by the devices participating in the joint learning. That is, a state channel can be established between the application server node and each device participating in the joint learning, so that the training result obtained after the device performs local training in the joint learning does not need to be shared among the nodes of the blockchain network, and the high TPS requirement of combining joint learning with blockchain technology can be met.
It should be appreciated that a state channel is an "off-chain" technique for performing transactions and other state updates. That is, a state channel may refer to an off-chain channel of a blockchain: data or information sent through the off-chain channel does not need to be shared among the nodes of the blockchain, and the off-chain channel may be established between two nodes.
For example, the state channel may be a unidirectional off-chain payment channel. For example, a unidirectional off-chain payment channel may be established between the application server node and the device participating in the joint learning, and is used by the application server node to pay digital currency to the device after receiving the training result of the joint learning sent by the device.
For example, the state channel may be a bidirectional off-chain channel. For example, a bidirectional off-chain channel may be established between the application server node and the device participating in the joint learning; the device may send the local training result of the joint learning to the application server node through the bidirectional off-chain channel, and the application server node may pay digital currency to the device after receiving the training result sent by the device.
With reference to the second aspect, in certain implementation manners of the second aspect, the method further includes receiving channel address information sent by a blockchain network, where the channel address information is used to identify the status channel, and sending the channel address information to the device.
With reference to the second aspect, in certain implementations of the second aspect, before the receiving the channel address information sent by the blockchain network, the method further includes sending a second transaction to the blockchain network, the second transaction being used to deploy the status channel.
In the application, the application server node can send a transaction for deploying an off-chain channel between itself and the devices participating in the joint learning, so as to receive, through the off-chain channel, the training results for updating the initial model parameters sent by the devices. This can meet the TPS requirement while protecting the data privacy of the training results.
With reference to the second aspect, in certain implementation manners of the second aspect, the method further includes sending a third transaction to the blockchain network, where the third transaction is used to deploy a joint learning smart contract to the blockchain network, the joint learning smart contract is an under-chain smart contract, an execution environment of the under-chain smart contract does not belong to the blockchain network, and training logic instructions of the joint learning smart contract include the initial model parameters.
In the application, the application server node can deploy an under-chain smart contract to the blockchain network, where an under-chain smart contract refers to a smart contract whose execution environment does not belong to the blockchain network. Deploying the under-chain smart contract to the blockchain network can ensure the trustworthiness of the joint learning smart contract.
With reference to the second aspect, in certain implementations of the second aspect, after the training result is received through the status channel, the method further includes paying digital currency to the device through the status channel.
With reference to the second aspect, in certain implementation manners of the second aspect, before the payment of the digital currency to the device through the status channel, the method further includes receiving a first message sent by the device, where the first message is used to instruct the device to participate in joint learning, and the first message includes a digital currency wallet address of the device.
It should be appreciated that a digital currency wallet address may be used to identify a device. When a plurality of devices participate in the joint learning, the application server node distinguishes the different devices according to the digital currency wallet addresses of the plurality of devices, so that after receiving the training results of the joint learning sent by the plurality of devices, the application server node pays digital currency to the plurality of devices respectively.
In a third aspect, a joint learning device is provided, which includes a receiving unit, a processing unit and a sending unit, where the receiving unit is configured to receive initial model parameters sent by an application server node, the initial model parameters being parameters used by devices participating in joint learning to establish an initial model; the processing unit is configured to train the initial model parameters and generate a training result for updating the initial model parameters; and the sending unit is configured to send the training result to the application server node through a state channel, where the state channel is an off-chain channel between the device and the application server node, and the off-chain channel is located outside a blockchain network.
With reference to the third aspect, in some implementations of the third aspect, the receiving unit is further configured to receive channel address information sent by the application server node, where the channel address information is used to identify the status channel.
With reference to the third aspect, in some implementations of the third aspect, the receiving unit is further configured to obtain a joint learning smart contract, where the joint learning smart contract includes training logic instructions of the initial model parameters, and the processing unit is specifically configured to run the joint learning smart contract to train the initial model parameters, and generate a training result for updating the initial model parameters.
With reference to the third aspect, in some implementations of the third aspect, the receiving unit is specifically configured to obtain the joint learning smart contract through the blockchain network, where the joint learning smart contract is an under-chain smart contract, and an execution environment of the under-chain smart contract does not belong to the blockchain network.
With reference to the third aspect, in certain implementations of the third aspect, the processing unit is specifically configured to run the joint learning smart contract in a joint learning application.
With reference to the third aspect, in certain implementations of the third aspect, the processing unit is specifically configured to run the joint learning smart contract under a trusted execution environment TEE to train the initial model parameters.
With reference to the third aspect, in certain implementations of the third aspect, the receiving unit is further configured to receive digital currency paid by the application server node through the status channel.
With reference to the third aspect, in some implementations of the third aspect, the sending unit is further configured to send a first message to the application server node, where the first message is used to instruct the device to participate in joint learning, and the first message includes a digital money wallet address of the device.
With reference to the third aspect, in some implementations of the third aspect, the receiving unit is specifically configured to receive, under a trusted execution environment TEE, digital currency paid by the application server node through the status channel.
With reference to the third aspect, in some implementations of the third aspect, the processing unit is further configured to determine that the training task of joint learning is ended, and the sending unit is further configured to send a first transaction to the blockchain network, where the first transaction is used to instruct to close the status channel.
In a fourth aspect, a joint learning device is provided, which includes a sending unit, configured to send initial model parameters to a device, where the initial model parameters are parameters used by the device to establish an initial model participating in joint learning, and a receiving unit, configured to receive, through a state channel, a training result for updating the initial model parameters, where the state channel is an off-chain channel between the device and an application server node, and the off-chain channel is located outside a blockchain network.
With reference to the fourth aspect, in some implementations of the fourth aspect, the receiving unit is further configured to receive channel address information sent by the blockchain network, where the channel address information is used to identify the status channel, and the sending unit is further configured to send the channel address information to the device.
With reference to the fourth aspect, in some implementations of the fourth aspect, the sending unit is further configured to send a second transaction to the blockchain network, where the second transaction is used to deploy the status channel.
With reference to the fourth aspect, in some implementations of the fourth aspect, the sending unit is further configured to send a third transaction to the blockchain network, where the third transaction is used to deploy a joint learning smart contract to the blockchain network, the joint learning smart contract is an under-chain smart contract, an execution environment of the under-chain smart contract does not belong to the blockchain network, and training logic instructions of the initial model parameters are included in the joint learning smart contract.
With reference to the fourth aspect, in certain implementations of the fourth aspect, the sending unit is further configured to pay digital currency to the device through the status channel.
With reference to the fourth aspect, in some implementation manners of the fourth aspect, the receiving unit is further configured to receive a first message sent by the device, where the first message is used to indicate that the device participates in joint learning, and the first message includes information of a digital currency wallet address of the device.
In a fifth aspect, there is provided a joint learning device comprising a processor, a memory for storing a computer program, the processor being adapted to invoke and run the computer program from the memory such that the joint learning device performs the joint learning method of the first aspect and its various possible implementations.
For example, the joint learning device may be a terminal device.
Optionally, the processor is one or more, and the memory is one or more.
Alternatively, the memory may be integrated with the processor or the memory may be separate from the processor.
In a sixth aspect, there is provided a joint learning device comprising a processor, a memory for storing a computer program, the processor being adapted to invoke and run the computer program from the memory, such that the joint learning device performs the joint learning method of the second aspect and its various possible implementations.
For example, the joint learning device may be an application server node.
Optionally, the processor is one or more, and the memory is one or more.
Alternatively, the memory may be integrated with the processor or the memory may be separate from the processor.
In a seventh aspect, a computer program product is provided, including a computer program (which may also be referred to as code or instructions), where when the computer program is run, a computer or at least one processor is caused to perform the method of the first aspect and its various implementations.
In an eighth aspect, a computer program product is provided, including a computer program (which may also be referred to as code or instructions), where when the computer program is run, a computer or at least one processor is caused to perform the method of the second aspect and its various implementations.
In a ninth aspect, a computer readable medium is provided, which stores a computer program (which may also be referred to as code or instructions), where when the computer program is run on a computer or at least one processor, the computer or the processor is caused to perform the method of the first aspect and its various implementations.
In a tenth aspect, a computer readable medium is provided, which stores a computer program (which may also be referred to as code or instructions), where when the computer program is run on a computer or at least one processor, the computer or the processor is caused to perform the method of the second aspect and its various implementations.
In an eleventh aspect, a chip system is provided, the chip system comprising a processor for supporting a server in a computer to implement the functions referred to in the first aspect and its various implementations.
In a twelfth aspect, a chip system is provided, the chip system including a processor for supporting a server in a computer to implement the functions involved in the second aspect and its various implementations.
Detailed Description
The technical scheme of the application will be described below with reference to the accompanying drawings.
First, the blockchain technology, smart contracts and joint learning concepts involved in the embodiments of the present application will be briefly described.
1. Smart contract
A smart contract is a collection of code and data, and may also be referred to as a "programmable contract". Generally, a smart contract is defined by program code and preset operating conditions, and performs actions when the operating conditions are triggered. The "intelligence" lies in execution: once a preset condition is reached, the contract runs automatically.
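As a toy illustration of the "code + data + preset condition" idea described above (not a contract language of any specific blockchain), the following Python sketch triggers an action automatically when its preset condition is met:

```python
class SmartContract:
    def __init__(self, state, condition, action):
        self.state = state            # the contract's data
        self.condition = condition    # preset operating condition
        self.action = action          # code executed automatically when triggered

    def on_event(self, event):
        if self.condition(self.state, event):
            self.action(self.state, event)

# Example: release an escrowed amount once a delivery confirmation arrives.
escrow = SmartContract(
    state={"escrowed": 10, "released": False},
    condition=lambda s, e: e.get("type") == "delivery_confirmed" and not s["released"],
    action=lambda s, e: s.update(released=True),
)
escrow.on_event({"type": "delivery_confirmed"})
print(escrow.state)   # {'escrowed': 10, 'released': True}
```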
2. Joint learning
Fig. 1 is a schematic flow chart of joint learning, including steps 110 to 140.
110. The model demander sends initial parameters to one or more devices participating in the joint learning, the initial parameters being used by the one or more devices to establish an initial model participating in the joint learning.
It should be appreciated that the initial model may be a machine learning model or may also be a computer algorithm model, for example, the initial model may be a neural network model employed in machine learning, and when the initial model is a neural network model, the initial parameters may be weight values used to build the neural network model. When the initial model is a computer algorithm model, the initial parameters may be parameters required to build the algorithm model. The foregoing is illustrative of the present application and is not to be construed as limiting thereof.
It should be appreciated that the model demander may be an application server node, or may be an application APP in an application server node. The device may be a terminal device, for example, a user device, a mobile device, a user terminal, a wireless communication device or a user equipment. The foregoing is illustrative of the present application and is not to be construed as limiting thereof.
120. The devices participating in the joint learning perform local training according to the acquired initial model parameters to obtain training results for updating the initial model parameters.
130. The equipment participating in the joint learning feeds back training results to the model demander.
It should be noted that the training result may be obtained by learning from the privacy data of the user of the device to improve the initial model, and then compressing the changed portion of the initial model into a small update package for feedback.
For example, when the initial model is a neural network model, the changed portion of the initial model may be a changed portion of the network weight value of the initial model.
140. The model demander adjusts model parameters according to training results fed back by one or more devices participating in the joint learning.
In other words, in a joint learning scenario, a device participating in the joint learning may download the current latest initial model, train it locally to improve the initial model, and then compress the changed portion of the initial model into a small update package. The updated part of the model is transmitted to the model demander by an encrypted communication method, and the model demander averages the received model updates fed back by the devices participating in the joint learning so as to improve the initial model. Therefore, all training data stay on the devices participating in the joint learning: the personal privacy data of the user used for local training on the device are not sent to the model demander, and only the changed part of the model is sent.
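The averaging step described above can be sketched as follows; this is a hedged illustration only, and the assumption that devices feed back plain weight deltas (rather than some other compressed format) is made purely for the example.

```python
import numpy as np

def aggregate_updates(initial_weights, update_packages):
    """Average the weight deltas fed back by the participating devices."""
    mean_delta = [np.mean([pkg[i] for pkg in update_packages], axis=0)
                  for i in range(len(initial_weights))]
    return [w + d for w, d in zip(initial_weights, mean_delta)]

# Three devices feed back small update packages for a single 2x2 weight matrix.
w0 = [np.zeros((2, 2))]
updates = [[np.full((2, 2), 0.1)], [np.full((2, 2), 0.2)], [np.full((2, 2), 0.3)]]
print(aggregate_updates(w0, updates)[0])   # each entry is 0.2, the mean of the deltas
```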
3. Off-chain payment channel technology
The payment channel under the blockchain is one of the technical routes of off-chain scaling, that is, a general technical scheme for updating part of the transaction-related state outside the blockchain. The core idea is that the parties to a transaction establish an off-chain communication channel so that both parties can execute interactive actions in the off-chain channel; during this period, state updates are not submitted to the main-chain miners, and when the state channel needs to be closed, the final state is submitted to the main chain through a close channel transaction (Close Channel Transaction) and synchronized to the main-chain ledger. Since the intermediate process does not interact with the main chain, the state channel has very high execution efficiency.
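The channel life cycle described above can be sketched as follows; signatures and the main-chain interface are deliberately omitted, so this is a conceptual illustration rather than a complete channel implementation.

```python
class PaymentChannel:
    def __init__(self, deposit_a, deposit_b):
        self.balance_a, self.balance_b = deposit_a, deposit_b
        self.nonce = 0                              # counts off-chain state updates

    def pay_a_to_b(self, amount):
        assert amount <= self.balance_a
        self.balance_a -= amount
        self.balance_b += amount
        self.nonce += 1                             # state updated off-chain only

    def close(self):
        # Only this final state is submitted to the main chain as the
        # "close channel transaction"; intermediate updates never touch it.
        return {"nonce": self.nonce, "A": self.balance_a, "B": self.balance_b}

channel = PaymentChannel(deposit_a=100, deposit_b=0)
for _ in range(3):
    channel.pay_a_to_b(10)                          # three off-chain payments
print(channel.close())                              # {'nonce': 3, 'A': 70, 'B': 30}
```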
4. Blockchain techniques
Blockchain technology implements a chained data structure in which blocks of data and information are connected sequentially in chronological order, and uses cryptography to ensure that the distributed storage cannot be tampered with or forged. Data and information in a blockchain are generally referred to as "transactions".
Blockchain technology is not a single technology, but is a system that integrates applications of point-to-point transmission, consensus mechanisms, distributed data storage, and cryptographic principles, with the technical characteristics of full disclosure and tamper resistance.
First, point-to-point transmission: the nodes participating in the blockchain are independent and peer-to-peer, and data and information are synchronized between the nodes through point-to-point transmission technology. The nodes may be different physical machines, or different instances in the cloud.
Second, consensus mechanism: the consensus mechanism of a blockchain refers to the process by which the nodes operated by multiple parties agree on specific data and information through interaction among the nodes under preset logic rules. A consensus mechanism relies on a well-designed algorithm, so different consensus mechanisms differ in performance (for example, TPS), the time delay to reach consensus, the computing resources consumed, the transmission resources consumed, and so on.
Third, distributed data storage: the nodes participating in the blockchain each store independent and complete data, so that the data storage is fully disclosed among the nodes. Unlike conventional distributed data storage, which divides the data into multiple parts according to certain rules for backup or synchronous storage, the distributed data storage of the blockchain relies on consensus among the peer nodes in the blockchain to achieve highly consistent data storage.
Fourth, cryptographic principles: blockchains are typically based on asymmetric encryption techniques to enable trusted information dissemination, verification, and so on.
The concept of a "block" is to organize one or more data records in the form of blocks, where the size of a block can be customized according to the actual application scenario, and the "chain" is a data structure that connects the blocks storing the data records in time sequence using a hash technique. In the blockchain, each block comprises a block header and a block body: the block body contains the transaction records packed into the block, and the block header contains the root hash of all transactions in the block and the hash of the previous block. This data structure ensures that the data stored on the blockchain is tamper-proof.
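The block structure described above can be sketched as follows; a real blockchain would typically use a Merkle tree for the transaction root rather than the single flat hash used here for brevity.

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_block(prev_hash: str, transactions: list) -> dict:
    body = {"transactions": transactions}
    header = {
        "prev_hash": prev_hash,                               # hash of the previous block
        "tx_root": sha256(json.dumps(transactions).encode()), # root hash of this block's transactions
    }
    return {"header": header, "body": body}

genesis = make_block("0" * 64, [{"from": "A", "to": "B", "amount": 5}])
block_1 = make_block(sha256(json.dumps(genesis["header"]).encode()),
                     [{"from": "B", "to": "C", "amount": 2}])
print(block_1["header"])
```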
In the prior-art scheme combining blockchain and joint learning, the training result of each training round of a device is sent to the nodes in the blockchain network for processing. That is, the training results of local training by every device participating in the joint learning need to be shared at multiple nodes of the blockchain. Every training round is put on chain; when the devices are tens of millions of mobile phones, this places a high requirement on transaction throughput (transactions per second, TPS), and the existing blockchain architecture cannot meet the requirement of high TPS.
In view of this, the application provides a method combining blockchain and joint learning, which can send the training results of the devices participating in the joint learning to the model demander through state channels. That is, a state channel can be established between the application server node and each device participating in the joint learning, so that the training result obtained after the device performs local training in the joint learning does not need to be shared at multiple nodes of the blockchain network, and the high TPS requirement of combining joint learning with blockchain technology can be met.
In the present application, the device may be a terminal device, for example, a user device, a mobile device, a user terminal, a wireless communication device or user equipment. The terminal device may also be a cellular telephone, a cordless telephone, a session initiation protocol (session initiation protocol, SIP) phone, a wireless local loop (wireless local loop, WLL) station, a personal digital assistant (personal digital assistant, PDA), a handheld device with wireless communication capabilities, a computing device or other processing device connected to a wireless modem, an in-vehicle device, a wearable device, a terminal device in a future 5G network or a terminal device in a future evolved public land mobile network (public land mobile network, PLMN), etc., as embodiments of the present application are not limited in this respect.
A method 200 combining blockchain and joint learning according to an embodiment of the present application is described below with reference to FIG. 2. FIG. 2 shows a schematic flow chart of the method 200, which includes steps 210 to 230.
Step 210, the device receives initial model parameters sent by the application server node, where the initial model parameters are parameters used by devices participating in joint learning to build an initial model.
It should be noted that, the initial model may be a machine learning model or may also be a computer algorithm model, for example, the initial model may be a neural network model used in machine learning, and when the initial model is a neural network model, the initial parameters may be weight values used to build the neural network model. When the initial model is a computer algorithm model, the initial parameters may be parameters required to build the algorithm model. The foregoing is illustrative of the present application and is not to be construed as limiting thereof.
It should be appreciated that in the present application, the application server node may be a model demander as shown in fig. 1, for example, the application server node may be a server, a device for providing a computing service. Or the application server node may also be an application APP in the application server node.
It should be appreciated that in embodiments of the present application, the device and application server nodes may be light nodes, i.e., nodes that own transaction data associated with themselves. For a blockchain network, if a full node, i.e. a node having all transaction data of the full network, is considered as a node in the blockchain network, the device and application server nodes do not belong to the blockchain network, and if both the full node and the light node are considered as nodes in the blockchain network, the device and application server nodes belong to the blockchain network.
Step 220, the device trains the initial model parameters and generates a training result for updating the initial model parameters.
The training result may be data information generated by the device learning based on the privacy data to improve the initial model and then compressing the changed portion of the initial model.
For example, when the initial model is a neural network model, the training result may be the changed part of the neural network weight values. That is, assuming the initial parameter is a weight value W1 of the neural network model, the device participating in the joint learning may train the neural network locally, and when the trained weight value of the neural network model is W2, the training result may be the difference between W2 and W1.
In one example, a device participating in the joint learning may obtain a joint learning smart contract, the joint learning smart contract may include training logic instructions for the initial model parameters, and the device may train the initial model parameters according to the obtained joint learning smart contract to generate a training result. The structural design of the joint learning smart contract can be as shown in Table 1.
TABLE 1
As shown in Table 1, the joint learning smart contract may include the data information required for the joint learning, the content requirements of the training results fed back to the application server node by the devices participating in the joint learning, and the corresponding fee information for the training results fed back by the devices participating in the joint learning.
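A hypothetical rendering of the contract fields summarized in Table 1 is sketched below; the field names (required_data, result_requirements, fee_per_result, training_logic) are illustrative assumptions, not the structure actually claimed.

```python
from dataclasses import dataclass, field

@dataclass
class JointLearningContract:
    required_data: list            # data information the device must collect for training
    result_requirements: dict      # content requirements of the training result fed back
    fee_per_result: float          # fee paid for each training result fed back by a device
    training_logic: str = "sgd"    # identifier of the training logic for the initial model parameters
    initial_parameters: list = field(default_factory=list)

contract = JointLearningContract(
    required_data=["keyboard_input_stats"],
    result_requirements={"format": "weight_delta", "max_size_kb": 64},
    fee_per_result=0.01,
)
print(contract)
```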
It should be appreciated that the initial model may be a machine learning model, such as the initial model may be a neural network model employed in machine learning, and when the initial model is a neural network model, the initial parameters may be weight values for building the neural network model, and the training logic instructions of the initial model included in the smart contract may be to perform a gradient descent algorithm.
It should be appreciated that the initial model may be a computer algorithm model; for example, when the initial model is a computer algorithm model, the initial parameters may be parameters required to build the algorithm model, and the training logic instructions of the initial model included in the smart contract may be the algorithm required for the computer to perform one model calculation. The foregoing is illustrative of the present application and is not to be construed as limiting thereof. In an embodiment of the present application, the joint learning smart contract may be an under-chain smart contract deployed to the blockchain network, where an under-chain smart contract refers to a smart contract whose execution environment does not belong to the blockchain network.
For example, the application server node may obtain a smart contract after redesigning the structure of the off-chain smart contract, where the redesigned off-chain smart contract may meet the requirement of joint learning, i.e., the smart contract may be used as a carrier of a joint learning program, e.g., the joint learning may be a piece of code encapsulated in the smart contract, and then the redesigned off-chain smart contract may be regarded as a joint learning smart contract. The application server node may send a transaction (e.g., a third transaction) to the blockchain network that may be used to deploy the under-chain smart contract to the blockchain network.
Further, the application server node may also publish the address of the joint learning smart contract so that a third-party authority can review the joint learning program in it, thereby ensuring that the joint learning program is trusted. It should be appreciated that the address of the joint learning smart contract may be a hash value of the joint learning smart contract, and the third-party authority may find the joint learning smart contract in the blockchain network based on this hash value. The information published by the application server node may be as shown in Table 2.
TABLE 2
In one example, a device participating in the joint learning may obtain the joint learning smart contract through the blockchain network, where the joint learning smart contract is an under-chain smart contract that the application server node has deployed to the blockchain network.
For example, the devices participating in the joint learning may install a joint learning application, the application server node may issue an address that may be associated with the joint learning smart contract to the joint learning application, and the devices participating in the joint learning may download the joint learning smart contract in the joint learning application.
It should be appreciated that the joint learning smart contract deployed on the blockchain network is shared among the nodes of the blockchain network, so malicious tampering with the joint learning smart contract can be avoided to a certain extent, and the credibility of the joint learning smart contract is ensured.
In one example, devices participating in the joint learning may also obtain a joint learning smart contract from an application server node. The application does not limit the specific source of the intelligent contract for the joint learning, which is obtained by the equipment, under the condition of ensuring the true credibility of the intelligent contract for the joint learning.
In the embodiment of the application, after the device acquires the joint learning smart contract, it can send a first message to the application server node, where the first message is used to indicate that the device participates in the joint learning; that is, the device can indicate its participation in the joint learning by sending the first message to the application server node. The first message also includes a digital currency wallet address of the device, and after the device sends the local training result of the joint learning to the application server node, the application server node can pay digital currency according to the digital currency wallet address of the device.
It should be appreciated that the digital money wallet address may be used to identify devices participating in joint learning. When a plurality of devices participate in the joint learning, the application server node distinguishes different devices according to the digital currency wallet addresses of the plurality of devices, so that after receiving training results of the joint learning sent by the plurality of devices, the application server node respectively pays digital currency to the plurality of devices.
In one example, the application server node, upon receiving the digital currency wallet address of the device, may issue a transaction (e.g., a second transaction) to the blockchain network to deploy an off-chain channel contract and establish an off-chain channel between the application server node and the device, where the off-chain channel is located outside of the blockchain network.
Further, the blockchain network sends address information of the status channel to the application server, the address information of the status channel being used to identify the status channel.
It should be noted that the channel address information may be an index value, and according to the index value, the application server node may find the established state channel. For example, the channel address information may be a hash value of the state channel.
It should be appreciated that a state channel is an "off-chain" technique for performing transactions and other state updates. That is, a state channel may refer to an off-chain channel of a blockchain network: data or information sent through the off-chain channel does not need to be shared among the blockchain nodes, and the off-chain channel may be established between two nodes. For example, the state channel may be a unidirectional off-chain payment channel; for example, a unidirectional off-chain payment channel may be established between the application server node and the device, and is used by the application server node to pay digital currency to the device after receiving the training result of the joint learning sent by the device.
For example, the state channel may be a bidirectional off-chain channel, such as a bidirectional off-chain channel established between the application server node and the device, through which the device may send the local training result of the joint learning to the application server node, and the application server node may pay digital currency to the device after receiving the training result sent by the device.
In an embodiment of the present application, the device is based on system on chip (SoC) hardware, and may provide a three-layer hardware security architecture including a rich execution environment (rich execution environment, REE), a trusted execution environment (trusted execution environment, TEE), and a secure execution environment (secure execution environment, SEE), where the REE runs security-insensitive programs and saves security-insensitive data, the TEE runs security-sensitive programs and saves security-sensitive data, and the SEE runs high-security financial payment programs and saves high-security financial payment data.
By way of example and not limitation, the SoC may provide at least one runtime environment, such as TrustZone, Bowmore, eSE, or inSE; the TrustZone and Bowmore runtime environments may be referred to as the TEE, and the eSE and inSE runtime environments as the SEE.
For example, when the device is a smart phone, the SoC may be configured in the smart phone, through which the smart phone provides the TrustZone, Bowmore, eSE, inSE and other operating environments.
By way of example and not limitation, the above-described SoC may support running an instruction set of the advanced RISC machine (advanced RISC machine, ARM) architecture, and an SoC that supports running an instruction set of the ARM architecture is referred to as an ARM-based SoC. For example, an ARM-based SoC may be configured on a device to provide a three-layer hardware security architecture for the device.
In one example, the joint learning smart contract is run under a trusted execution environment TEE to train the initial model parameters.
For example, the device may run a joint learning smart contract in a joint learning application under a trusted execution environment TEE to train the initial model parameters, generating training results for updating the initial model parameters.
Step 230, sending the training result to the application server node through a state channel, where the state channel refers to an off-chain channel between the device and the application server node, and the off-chain channel is located outside the blockchain network.
For example, the state channel may be a unidirectional off-chain payment channel; for example, a unidirectional off-chain payment channel may be established between the application server node and the device, and is used by the application server node to pay digital currency to the device after receiving the training result of the joint learning sent by the device.
It should be understood that in embodiments of the present application, digital currency refers to electronic currency that is created, issued, and circulated by relying on verification and cryptographic techniques.
For example, the state channel may be a bidirectional off-chain channel, such as a bidirectional off-chain channel established between the application server node and the device, through which the device may send the local training result of the joint learning to the application server node, and the application server node may pay digital currency to the device after receiving the training result sent by the device.
The application server node may adjust the initial model parameters according to the training results received from the device. Wherein a device may refer to one or more devices that participate in joint learning.
In an embodiment of the present application, after the device sends the training result to the application server node through the status channel, the application server node may pay digital money to the device according to the acquired digital money wallet address of the device.
For example, the application server node may send a transaction to the device via the state channel (e.g., an off-chain payment channel). Suppose the total digital currency in the channel is S, with the application server node holding A and the device holding B (A + B = S), and the cost of a single training round is t; through the transaction, the digital currency of the application server node becomes A - t and the digital currency of the device becomes B + t. The transaction carries the signature of the application server node. After countersigning the transaction, the device can send it to the blockchain network.
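The balance update described above can be worked through in code as follows; the HMAC signature is only a stand-in for the application server node's real signature, and the function name channel_payment is hypothetical.

```python
import hashlib
import hmac
import json

def channel_payment(balance_server, balance_device, cost_t, server_key: bytes):
    """Move the cost of one training round from the server's balance to the device's."""
    assert cost_t <= balance_server
    new_state = {"server": balance_server - cost_t, "device": balance_device + cost_t}
    payload = json.dumps(new_state, sort_keys=True).encode()
    server_sig = hmac.new(server_key, payload, hashlib.sha256).hexdigest()
    return new_state, server_sig            # the device countersigns before the channel closes

state, sig = channel_payment(balance_server=90, balance_device=10, cost_t=5,
                             server_key=b"demo-key")
print(state)   # {'server': 85, 'device': 15}; the total S = 100 is preserved
```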
In one example, to ensure that the environment that pays the digital currency is secure and trusted, the device may receive the digital currency that the application server node pays through the state channel under a trusted execution environment TEE.
In an embodiment of the application, the device determines that the training task of joint learning is over, and the device may send a transaction (e.g., a first transaction) to the blockchain network, the first transaction may be used to indicate that the status channel is closed.
It will be appreciated that the device may receive a number of transactions sent by the application server node, so the digital currency held by the device in the state channel is continually incremented, and only the latest transaction needs to be retained. When the device decides to close the state channel, it can sign the latest transaction, send it to the blockchain network, and take out the digital currency belonging to the device.
In the embodiment of the application, the training parameters generated after the device performs local training in the joint learning can be fed back to the application server node through the state channel. Sending the training result through the state channel can meet the high TPS requirement of the scenario combining joint learning and blockchain. Meanwhile, because the training result of the device also carries a certain degree of privacy, sending it through the state channel avoids sharing it among the nodes of the blockchain network and protects the privacy of the device data.
The following describes a specific flow of the joint learning method in the embodiment of the present application with reference to fig. 3.
Fig. 3 is a flow chart of a joint learning method according to an embodiment of the present application. The method shown in fig. 3 includes steps 301 to 314, and steps 301 to 314 are described in detail below.
It should be appreciated that in fig. 3 the model demander may be the application server node shown in fig. 2, or may also be an application APP in the application server node. The device may be a terminal device, for example, a user device, a mobile device, a user terminal, a wireless communication device, a user equipment, or the like. The foregoing is illustrative of the present application and is not to be construed as limiting thereof.
Step 301, the device installs the joint learning APP locally.
The joint learning APP may collect different personal privacy data for each joint learning smart contract and preprocess the data. At the same time, the initialization model data w provided by the model demander is received.
In one example, as shown in fig. 4, an ARM-based SoC may provide a three-layer hardware security architecture for a device. The joint learning App can ensure the safety of the data processing process in a TEE trusted environment. After the equipment is provided with a reliable joint learning App, a plurality of joint learning intelligent contracts can be loaded at the same time, so that the requirements of a plurality of model demanders are met.
For example, different personal privacy data collected by the joint learning APP may be stored in a database of local personal privacy data. The personal privacy data is stored in a TEE trusted environment, so that the safety of the personal privacy data can be protected from being stolen by other malicious software on one hand, and the personal privacy data provided for a user using equipment can be ensured to be trusted on the other hand, and the personal privacy data is really valuable to a model demander.
Step 302, the model demander deploys an under-chain smart contract onto the chain, accepting auditing to ensure credibility and traceability.
For example, the model demander deploys the under-chain smart contract on the chain through a transaction. At the same time, the address of the under-chain smart contract may be published, for example, to the joint learning APP, so that a third-party institution can audit the joint learning program and its credibility is ensured.
It should be appreciated that the address of an under-chain smart contract may refer to a hash value of the under-chain smart contract, from which the under-chain smart contract may be found. It should be noted that, unlike common smart contracts in a blockchain, the execution environment of an under-chain smart contract is not on the chain, but on the off-chain mobile-phone side. Combining joint learning with the blockchain and under-chain smart contracts can ensure the credibility of the joint learning program. The structural design of the under-chain smart contract can be as shown in Table 1.
Step 303, the device may receive a recommendation of a joint learning smart contract program and download the under-chain smart contract program deployed on the blockchain network.
Step 304, the device sends the digital currency wallet address to the model demander and indicates to the model demander to participate in the joint learning process.
Step 305, the model demander issues a transaction (e.g., a second transaction) to the blockchain network for deploying the off-chain channel contracts to establish a status channel (e.g., an off-chain payment channel) with the device.
The state channel established between the application server node and the device may be a unidirectional off-chain payment channel used by the model demander to pay digital currency to the device, or a bidirectional off-chain channel used by the device to send training results to the model demander and by the model demander to pay digital currency to the device after receiving the training result sent by the device.
For example, the model demander deposits S in the unidirectional off-chain payment channel. To close the payment channel, the off-chain channel contract requires a transaction carrying the signature SigA of the model demander and the signature SigB of the user. The total amount of this transaction is S, the amount output to the model demander's address is A, and the amount output to the device's address is B, where A + B = S.
Step 306, the model demander receives channel address information sent by the blockchain network, where the channel address information is used to identify the state channel.
Step 307, the model demander sends the channel address information to the device.
It should be noted that, in step 305, the model demander sends a transaction to the blockchain network to request establishing a state channel with the device. When the blockchain network receives the transaction, it deploys the state channel between the model demander and the device, that is, the off-chain channel located outside the blockchain network, and sends the address information of the state channel to the model demander. After receiving the address information of the state channel, the model demander sends it to the device, and the device can then identify the established state channel.
Step 308, the model demander sends the initial model parameters to the device.
For example, the model demander sends the initial model parameters of the current round of joint learning to the device, and the initial parameters may be the values of the neural network weights w.
Step 309, the device runs the smart contract to perform a single training step locally, where the local single-step training may use a stochastic gradient descent optimization algorithm to update the value of the parameter w.
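For example, a single local stochastic gradient descent step could look like the following Python sketch. The linear model and squared-error loss are placeholders chosen only for illustration, since the actual model and loss function are defined by the joint learning smart contract.

import numpy as np


def sgd_single_step(w, x_batch, y_batch, lr=0.01):
    """One local stochastic gradient descent step for a linear model.

    The squared-error linear model is only a stand-in; the real model is
    defined by the joint learning smart contract.
    """
    predictions = x_batch @ w
    errors = predictions - y_batch
    gradient = x_batch.T @ errors / len(y_batch)
    return w - lr * gradient


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=3)                # initial model parameters from the demander
    x = rng.normal(size=(32, 3))          # local (private) training data
    y = x @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=32)
    w_updated = sgd_single_step(w, x, y)  # training result fed back over the channel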
In one example, as shown in FIG. 4, the execution environment of the under-chain smart contract for joint learning is a virtual machine (VM). For a given joint learning contract, the joint learning APP can run in the under-chain smart contract VM using the user data of the device and the initial model parameters. The execution environment of the under-chain smart contract can support not only the joint learning program but also other under-chain smart contract programs.
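The following Python fragment is only a rough stand-in for such a VM, under the assumption that the contract exposes a train() entry point: it loads the contract source into a restricted namespace and invokes that entry point on the initial parameters and local data. A real under-chain VM or TEE would provide far stronger isolation than exec() with a restricted builtins dictionary.

def run_under_chain_contract(contract_source: str, initial_params, local_data):
    """Execute the contract's train() entry point in a restricted namespace.

    This is only a rough stand-in for the under-chain smart contract VM of
    FIG. 4; a production system would use a proper sandbox or TEE.
    """
    sandbox_globals = {"__builtins__": {"len": len, "range": range, "sum": sum}}
    exec(contract_source, sandbox_globals)       # load the contract code
    train = sandbox_globals["train"]             # contract-defined entry point
    return train(initial_params, local_data)    # returns the training result


contract = """
def train(params, data):
    # toy contract: nudge each parameter toward the mean of the local data
    mean = sum(data) / len(data)
    return [p + 0.1 * (mean - p) for p in params]
"""

result = run_under_chain_contract(contract, [0.0, 0.5], [1.0, 2.0, 3.0])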
Step 310, the model demander receives the training result sent by the device.
For example, the device may feed back the training result to the model demander through the under-chain channel established with the model demander.
Step 311, the model demander pays digital money to the device through a status channel (e.g., an under-chain payment channel).
For example, through the status channel, the model demander issues a transaction to the device. Suppose the total asset in the channel is S, where the model demander holds A and the device holds B, and the cost of this single training round is t. In the output of this transaction, the digital currency of the model demander is A - t and the digital currency of the device is B + t, where A + B = S. A signature of the model demander is attached to this transaction. After the device also signs the transaction, the transaction can be sent to the blockchain network, the channel is closed, and the digital currency belonging to the device is taken out.
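As a non-limiting sketch, the per-round off-chain update could be represented as follows in Python: each payment moves t from the model demander's balance to the device's balance while preserving A + B = S, and the device only needs to keep the most recently signed update. The nonce field and the placeholder signature strings are assumptions made for illustration.

from dataclasses import dataclass, replace


@dataclass(frozen=True)
class ChannelUpdate:
    nonce: int            # increases with every new off-chain transaction
    demander_amount: int  # A, decreases with accumulated payments
    device_amount: int    # B, increases with accumulated payments
    demander_sig: str     # attached by the model demander
    device_sig: str = ""  # attached by the device when it accepts the update


def pay_for_round(prev: ChannelUpdate, cost_t: int, demander_sig: str) -> ChannelUpdate:
    """Model demander pays t for one training round: A -> A - t, B -> B + t."""
    assert 0 <= cost_t <= prev.demander_amount, "insufficient channel balance"
    return ChannelUpdate(
        nonce=prev.nonce + 1,
        demander_amount=prev.demander_amount - cost_t,
        device_amount=prev.device_amount + cost_t,
        demander_sig=demander_sig,
    )


# The device only keeps the update with the highest nonce; to settle, it
# countersigns that update and submits it to the blockchain network.
state = ChannelUpdate(nonce=0, demander_amount=100, device_amount=0,
                      demander_sig="SigA_0")
state = pay_for_round(state, cost_t=5, demander_sig="SigA_1")
latest = replace(state, device_sig="SigB_1")  # device accepts and countersigns

Closing the channel then amounts to submitting this latest countersigned state to the blockchain network, which pays out the two balances on chain.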
In one example, the status channel may operate in a TEE environment, so that the device is in a secure environment both when receiving digital currency from the model demander and when sending model data updates to the model demander. When spending digital currency, the device signs in the TEE environment using the private key in its digital currency wallet.
Step 312, the model demander receives the training results fed back by the devices participating in the joint learning, averages them to obtain the latest model parameters, returns to step 306, and then transmits the latest model parameters to the devices participating in the joint learning.
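For example, the averaging of the fed-back training results could be a plain or weighted average, as in the following Python sketch. Weighting by local sample counts is shown only as one possible choice; the embodiment merely requires some form of average processing to obtain the latest model parameters.

import numpy as np


def federated_average(training_results, weights=None):
    """Average the parameter updates fed back by the participating devices.

    Plain (optionally weighted) averaging is shown here as one possible
    aggregation rule.
    """
    results = [np.asarray(r, dtype=float) for r in training_results]
    if weights is None:
        weights = np.ones(len(results))
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    return sum(w * r for w, r in zip(weights, results))


# Latest model parameters sent back to the devices for the next round;
# the weights here stand in for each device's local sample count.
latest_w = federated_average([[0.9, -1.8], [1.1, -2.2], [1.0, -2.0]],
                             weights=[32, 64, 32])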
Step 313, the device can view the on-chain digital currency and the off-chain digital currency belonging to the device at any time.
The device may receive a number of such transactions sent by the model demander, so that the digital currency of the device in the status channel is incremented; only the latest transaction needs to be maintained.
Step 314, when the joint learning ends and the device decides to close the channel, the device signs the latest transaction, sends it to the blockchain network, and takes out the digital currency belonging to the device.
It should be noted that the example of fig. 3 is merely to aid one skilled in the art in understanding embodiments of the present application and is not intended to limit embodiments of the present application to the specific scenario illustrated. From the example of fig. 3 given, it will be apparent to those skilled in the art that various equivalent modifications or variations can be made, and such modifications or variations are intended to be within the scope of the embodiments of the application.
It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application.
The above describes in detail the joint learning method according to the embodiments of the present application. In the embodiments of the present application, the training result generated after a device participating in joint learning performs local training may be fed back to the application server node through the state channel. Transmitting the training result through the state channel can satisfy the requirement of high TPS in the scenario where joint learning is combined with a blockchain. Meanwhile, the training result of the device has a certain privacy; transmitting it through the state channel avoids sharing it among the nodes of the blockchain network and protects the privacy of the device data. It should be understood that the joint learning device according to the embodiments of the present application may perform the various methods of the foregoing embodiments; that is, for the specific working processes of the following products, reference may be made to the corresponding processes in the foregoing method embodiments.
Fig. 5 is a schematic block diagram of a joint learning device 500 provided by an embodiment of the present application. It should be appreciated that the joint learning device 500 is capable of performing the various steps performed by the device in the method of fig. 2 or 3, and will not be described in detail herein to avoid repetition. The joint learning device 500 includes a receiving unit 510, a processing unit 520, and a transmitting unit 530.
The receiving unit 510 is configured to receive initial model parameters sent by an application server node, where the initial model parameters are parameters used by a device participating in joint learning to establish an initial model; the processing unit 520 is configured to train the initial model parameters and generate a training result for updating the initial model parameters; and the sending unit 530 is configured to send the training result to the application server node through a state channel, where the state channel is an under-chain channel between the device and the application server node, and the under-chain channel is located outside the blockchain network.
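Purely as an architectural illustration, and assuming hypothetical unit interfaces that are not defined by this application, the cooperation of the three units in one training round might be sketched in Python as follows.

class JointLearningDevice500:
    """Structural sketch of device 500 with its three functional units.

    The unit method names are hypothetical; they only mirror the roles of
    the receiving unit 510, processing unit 520 and sending unit 530.
    """

    def __init__(self, receiving_unit, processing_unit, sending_unit):
        self.receiving_unit = receiving_unit    # unit 510
        self.processing_unit = processing_unit  # unit 520
        self.sending_unit = sending_unit        # unit 530

    def one_round(self, state_channel):
        # 510: receive the initial model parameters from the application server node
        initial_params = self.receiving_unit.receive_initial_parameters()
        # 520: train them locally and produce a training result
        training_result = self.processing_unit.train(initial_params)
        # 530: return the training result through the state channel
        self.sending_unit.send_training_result(training_result, state_channel)
        return training_result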
Optionally, as an embodiment, the receiving unit 510 is further configured to receive channel address information sent by the application server node, where the channel address information is used to identify the status channel.
Optionally, as an embodiment, the receiving unit 510 is further configured to obtain a joint learning smart contract, where the joint learning smart contract includes training logic instructions of the initial model parameters, and the processing unit 520 is specifically configured to run the joint learning smart contract to train the initial model parameters, and generate a training result for updating the initial model parameters.
Optionally, as an embodiment, the receiving unit 510 is further specifically configured to obtain the joint learning smart contract through a blockchain network, where the joint learning smart contract is an under-chain smart contract, and the execution environment of the under-chain smart contract does not belong to the blockchain network.
Optionally, as one embodiment, the processing unit 520 is specifically configured to run the joint learning smart contract in a joint learning application to train the initial model parameters.
Optionally, as an embodiment, the processing unit 520 is specifically configured to run the joint learning smart contract under a trusted execution environment TEE to train the initial model parameters.
Optionally, as an embodiment, the receiving unit 510 is specifically configured to receive digital currency paid by the application server node through the status channel.
Optionally, as an embodiment, the receiving unit 510 is specifically configured to receive, under a trusted execution environment TEE, digital currency paid by the application server node through the status channel.
Optionally, as an embodiment, the sending unit 530 is further configured to send a first message to the application server node, where the first message is used to instruct the device to participate in joint learning, and the first message includes a digital money wallet address of the device.
Optionally, as an embodiment, the processing unit 520 is further configured to determine that the training task of joint learning is ended, and the sending unit 530 is further configured to send a first transaction to the blockchain network, where the first transaction is used to instruct to close the status channel.
It should be understood that the joint learning device 500 herein is embodied in the form of functional units. The term "unit" herein may be implemented in software and/or hardware, without specific limitation. For example, a "unit" may be a software program, a hardware circuit or a combination of both that implements the functions described above. The hardware circuitry may include Application Specific Integrated Circuits (ASICs), electronic circuits, processors (e.g., shared, proprietary, or group processors, etc.) and memory for executing one or more software or firmware programs, merged logic circuits, and/or other suitable components that support the described functions. Thus, the elements of the examples described in the embodiments of the present application can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Fig. 6 is a schematic block diagram of a joint learning device 600 provided by an embodiment of the present application. It should be appreciated that the joint learning device 600 may be an application server node capable of performing the various steps performed by the application server node in the method of fig. 2 or 3, and will not be described in detail herein to avoid repetition. The joint learning device 600 includes a transmitting unit 610 and a receiving unit 620.
The sending unit 610 is configured to send initial model parameters to a device, where the initial model parameters are parameters used by the device participating in joint learning to establish an initial model, and the receiving unit 620 is configured to receive a training result for updating the initial model parameters through a state channel, where the state channel is an under-chain channel between the device and the application server node, and the under-chain channel is located outside a blockchain network.
It should be understood that the joint learning device 600 may also comprise a processing unit, which may be used to control the receiving unit 620 and the sending unit 610 to perform the relevant steps.
Optionally, as an embodiment, the receiving unit 620 is further configured to receive channel address information sent by the blockchain network, where the channel address information is used to identify the status channel, and the sending unit 610 is further configured to send the channel address information to the device.
Optionally, as an embodiment, the sending unit 610 is further configured to send a second transaction to the blockchain network, where the second transaction is used to deploy the status channel.
Optionally, as an embodiment, the sending unit 610 is further configured to send a third transaction to a blockchain network, where the third transaction is used to deploy a joint learning smart contract to the blockchain network, the joint learning smart contract is an under-chain smart contract, an execution environment of the under-chain smart contract does not belong to the blockchain network, and training logic instructions of the initial model parameters are included in the joint learning smart contract.
Optionally, as an embodiment, the sending unit 610 is further configured to pay digital currency to the device via the status channel.
Optionally, as an embodiment, the receiving unit 620 is further configured to receive a first message sent by the device, where the first message is used to indicate that the device participates in joint learning, and the first message includes a digital money wallet address of the device.
It should be understood that the joint learning device 600 herein is embodied in the form of functional units. The term "unit" herein may be implemented in software and/or hardware, without specific limitation. For example, a "unit" may be a software program, a hardware circuit or a combination of both that implements the functions described above. The hardware circuitry may include Application Specific Integrated Circuits (ASICs), electronic circuits, processors (e.g., shared, proprietary, or group processors, etc.) and memory for executing one or more software or firmware programs, merged logic circuits, and/or other suitable components that support the described functions. Thus, the elements of the examples described in the embodiments of the present application can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Fig. 7 shows a schematic block diagram of a joint learning device 700 of another embodiment of the present application. The joint learning device 700 may be a terminal device, as shown in fig. 7, the joint learning device 700 including a processor 720, a memory 760, a communication interface 740, and a bus 750. The processor 720, the memory 760, and the communication interface 740 communicate via the bus 750, or may communicate via other means such as wireless transmission. The memory 760 is used to store instructions and the processor 720 is used to execute the instructions stored by the memory 760. The memory 760 stores the program code 711, and the processor 720 may call the program code 711 stored in the memory 760 to perform the joint learning method shown in fig. 2 or 3.
For example, the processor 720 may be configured to train the model parameters to generate the training result as described in step 220 of FIG. 2, or to run the smart contract for local training as described in step 309 of FIG. 3.
The memory 760 may include a read-only memory and a random access memory, and provides instructions and data to the processor 720. The memory 760 may also include a non-volatile random access memory. The memory 760 may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), and a direct rambus RAM (DR RAM).
The bus 750 may include a power bus, a control bus, a status signal bus, and the like in addition to a data bus. But for clarity of illustration, the various buses are labeled as bus 750 in fig. 7.
It should be appreciated that the joint learning device 700 shown in fig. 7 is capable of implementing the various processes performed by the device in the method embodiments shown in fig. 2 and 3. The operations and/or functions of the respective modules in the joint learning device 700 are respectively intended to implement the corresponding procedures of the device in the above-described method embodiments. Reference may be made specifically to the description in the above method embodiments; detailed descriptions are omitted here as appropriate to avoid repetition.
Fig. 8 shows a schematic block diagram of a joint learning device 800 of another embodiment of the present application. The joint learning device 800 may be an application server node, as shown in fig. 8, the joint learning device 800 comprising a processor 820, a memory 860, a communication interface 840 and a bus 850. The processor 820, the memory 860, and the communication interface 840 communicate via the bus 850, or may communicate via other means such as wireless transmission. The memory 860 is configured to store instructions and the processor 820 is configured to execute the instructions stored by the memory 860. The memory 860 stores program codes 811, and the processor 820 may call the program codes 811 stored in the memory 860 to execute the joint learning method shown in fig. 2 or 3.
The communication interface 840 shown in fig. 8 may correspond to the receiving unit 620 and the sending unit 610 in the joint learning device 600 shown in fig. 6.
The memory 860 may include a read-only memory and a random access memory, and provides instructions and data to the processor 820. The memory 860 may also include a non-volatile random access memory. The memory 860 may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), and a direct rambus RAM (DR RAM).
The bus 850 may include a power bus, a control bus, a status signal bus, and the like, in addition to a data bus. But for clarity of illustration, the various buses are labeled as bus 850 in fig. 8.
It should be appreciated that the joint learning device 800 shown in fig. 8 is capable of implementing the various processes performed by the application server node in the method embodiments shown in fig. 2 and 3. The operations and/or functions of the respective modules in the joint learning device 800 are respectively intended to implement the corresponding procedures of the application server node in the above-described method embodiments. Reference may be made specifically to the description in the above method embodiments; detailed descriptions are omitted here as appropriate to avoid repetition.
The present application also provides a computer readable storage medium having instructions stored therein which, when executed on a computer, cause the computer to perform the steps of the joint learning method described above and shown in fig. 2 and 3.
The present application also provides a computer program product comprising instructions which, when run on a computer or a processor, cause the computer or the processor to perform the steps of the joint learning method shown in fig. 2 and fig. 3.
The application also provides a chip comprising a processor. The processor is configured to read and run a computer program stored in a memory, so as to perform the corresponding operations and/or flows of the joint learning method described above.
Optionally, the chip further comprises a memory, the memory is connected with the processor through a circuit or a wire, and the processor is used for reading and executing the computer program in the memory. Further optionally, the chip further comprises a communication interface, and the processor is connected to the communication interface. The communication interface is used for receiving data and/or information to be processed, and the processor acquires the data and/or information from the communication interface and processes the data and/or information. The communication interface may be an input-output interface.
In the above embodiments, the processor may include, for example, a central processing unit (central processing unit, CPU), a microprocessor, a microcontroller, or a digital signal processor, and may further include a GPU, an NPU, and an ISP, and the processor may further include a necessary hardware accelerator or a logic processing hardware circuit, such as an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the program according to the present application. Further, the processor may have the function of operating one or more software programs, which may be stored in the memory.
The memory may be a read-only memory (ROM) or other types of static storage devices that can store static information and instructions, a random access memory (RAM) or other types of dynamic storage devices that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, an optical disc storage (including a compact disc, a laser disc, an optical disc, a digital versatile disc, a Blu-ray disc, etc.), a magnetic disk storage medium, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relation of associated objects and indicates that three kinds of relations may exist; for example, A and/or B may indicate that A exists alone, A and B exist at the same time, or B exists alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of the following" and similar expressions mean any combination of these items, including any combination of single items or plural items. For example, at least one of a, b and c may represent a, b, c, a-b, a-c, b-c, or a-b-c, where a, b and c may be single or plural.
Those of ordinary skill in the art will appreciate that the various elements and algorithm steps described in the embodiments disclosed herein can be implemented as a combination of electronic hardware, computer software, and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In several embodiments provided by the present application, any of the functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The storage medium includes a U disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, an optical disk, or other various media capable of storing program codes.
The foregoing is merely exemplary embodiments of the present application, and any person skilled in the art may easily conceive of changes or substitutions within the technical scope of the present application, which should be covered by the present application. The protection scope of the present application shall be subject to the protection scope of the claims.