CN113760578B - Method, device, equipment and computer program for cross-process rapid transmission of big data - Google Patents
- Publication number
- CN113760578B (application CN202110998847.3A)
- Authority
- CN
- China
- Prior art keywords
- data
- shared memory
- label information
- waiting task
- file descriptor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/543—User-generated data transfer, e.g. clipboards, dynamic data exchange [DDE], object linking and embedding [OLE]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/544—Buffers; Shared memory; Pipes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses a method, a device, equipment and a computer program for fast cross-process transmission of big data, in the technical field of interprocess communication and message transmission. The method comprises: establishing a waiting task set according to a first-in first-out principle; each host process in one or more host processes writing a group of data using a shared memory region and transmitting the file descriptor of the shared memory region to one or more service processes through a binder mechanism; acquiring the label information of the group of data written by each host process; calculating, with a neural network over all the label information, the label information of the one or more groups of data of interest to the waiting task corresponding to each service process; and each service process corresponding to one or more currently running waiting tasks reading its one or more groups of data of interest from the shared memory region through the file descriptor. The invention has the advantages of high data transmission efficiency and a high degree of intelligence in multi-process reading and writing.
Description
Technical Field
The invention relates to the technical field of interprocess communication and message transmission, and in particular to a method, a device, equipment, a storage medium and a computer program for fast cross-process transmission of big data, applied within applications of the android system.
Background
At present, some android apps interact across multiple processes: responsibilities are cleanly divided, and the stability of the host process is protected. The most common android inter-process communication mechanism is binder; traditional channels such as sockets, pipes and messages are comparatively inefficient. Binder, however, limits the size of a single transfer (about 1 MB) and copies the data once (from the sending process into kernel space), so it is not very efficient and not suitable for scenarios where big data (more than 1 MB) must be transmitted quickly. File sharing can also carry big data across processes, but that approach copies the data twice at the bottom layer and is therefore extremely slow.
In order to overcome the above drawbacks, several attempts have been made in the prior art. For example, Chinese patent CN112802232A discloses a method and related device for transmitting video stream data, in which an inter-process communication connection is established between an autopilot device and a vehicle terminal; a callback of a third-party device is registered through the inter-process communication mechanism so that video stream data can be written from the third-party device into shared memory, and the autopilot device receives the shared file descriptor sent by the vehicle terminal and obtains the video stream data through it. That method improves transmission efficiency because the processes read and write the shared memory directly, without any data copy. However, when multiple processes communicate through one shared memory region, reads and writes must be synchronized and mutually excluded, which lowers the overall communication efficiency.
Disclosure of Invention
Therefore, in order to overcome the above-mentioned drawbacks, embodiments of the present invention provide a method, an apparatus, a device, a storage medium, and a computer program for quickly transmitting big data across processes, which can improve data transmission efficiency and improve efficiency of multi-process data reading and writing.
Therefore, the method for fast cross-process transmission of big data of an embodiment of the invention comprises the following steps:
s1, establishing a waiting task set according to a first-in first-out principle, wherein each waiting task element in the waiting task set corresponds to one service process among one or more service processes;
s2, each host process in one or more host processes writes a group of data by using a shared memory area, and the file descriptor of the shared memory area is transmitted to one or more service processes through a binder mechanism;
s3, obtaining label information of a group of data written in by each host process, wherein the label information comprises a data type, a data representation object, a data acquisition position and a data acquisition scene;
s4, calculating by using a neural network according to all the label information to obtain the label information of one or more groups of interested data of each service process corresponding to the waiting task;
s5, each service process corresponding to the currently running one or more waiting tasks reads one or more groups of data interested by each service process from the shared memory area through the file descriptor.
Preferably, the step in which each host process writes a group of data using the shared memory region, and the file descriptor of the shared memory region is transmitted to the service process through the binder mechanism, includes:
s21, the host process creates a shared memory area with a fixed size, and obtains a file descriptor of the shared memory area;
and S22, the host process transmits the file descriptor to the service process through a binder mechanism.
Preferably, the host process creates a shared memory area with a fixed size, and the step of obtaining the file descriptor of the shared memory area includes:
s211, the host process acquires a system interface for creating the shared memory area through a reflection mechanism;
s212, the host process creates a shared memory area with a fixed size through the system interface, and obtains a file descriptor of the shared memory area.
Preferably, the step of obtaining tag information of one or more groups of data of interest corresponding to the waiting task for each service process by performing calculation using a neural network according to all tag information includes:
s41, obtaining label information of a plurality of groups of preset data to form a sample set, and recording label information corresponding to one or more groups of preset data which are interested in the waiting task;
s42, taking the sample set as an input sample, training and testing the multilayer neural network until the output label information is consistent with the recorded label information interested by the waiting task, and obtaining a trained neural network model;
and S43, sequentially inputting all the label information serving as input quantity into the trained neural network model, and obtaining the label information of one or more groups of interested data of each service process corresponding to the waiting task.
Preferably, the step of reading, by the file descriptor, one or more groups of data of interest of each service process corresponding to the currently running one or more waiting tasks from the shared memory area includes:
s51-1, the service process corresponding to each waiting task in the one or more waiting tasks running at present sends a data reading inquiry message to the host process through a binder mechanism;
s51-2, when receiving the feedback message of the data reading inquiry message sent by the host process through the binder mechanism, the service process creates a corresponding byte stream through the file descriptor, and reads one or more groups of data which are interested by the service process corresponding to the waiting task from the shared memory area.
The device for fast cross-process transmission of big data of an embodiment of the invention comprises:
a waiting task set creating unit, configured to create a waiting task set according to a first-in first-out principle, wherein each waiting task element in the waiting task set corresponds to one service process among one or more service processes;
a data writing unit, configured to have each host process in one or more host processes write a group of data using a shared memory region, and to transmit the file descriptor of the shared memory region to one or more service processes through a binder mechanism;
a tag information acquisition unit, configured to acquire the label information of the group of data written by each host process, the label information including a data type, a data representation object, a data acquisition position, a data acquisition scene and the like;
the neural network selecting unit is used for calculating by using the neural network according to all the label information to obtain label information of one or more groups of interested data of each service process corresponding to the waiting task;
and the data reading unit is used for reading one or more groups of data which are interested by each service process corresponding to one or more waiting tasks currently running from the shared memory area through the file descriptor.
Preferably, the data writing unit includes:
a file descriptor obtaining unit, configured to create a shared memory region with a fixed size by a host process, and obtain a file descriptor of the shared memory region;
and the file descriptor transmission unit is used for transmitting the file descriptor to the service process by the host process through a binder mechanism.
An embodiment of the invention further provides equipment for fast cross-process transmission of big data, comprising:
one or more processors;
storage means for storing one or more programs;
when executed by the one or more processors, the one or more programs cause the one or more processors to implement the above-described method for quickly transferring big data across processes by an application.
A storage medium of an embodiment of the invention stores instructions which, when executed by a processor, implement the above method for fast cross-process transmission of big data.
A computer program of an embodiment of the invention is stored on a computer-readable storage medium and adapted to be executed on a computer; when the computer program runs on the computer, its instructions perform the steps of the above method for fast cross-process transmission of big data.
The method, the device, the equipment, the storage medium and the computer program for rapidly transmitting the big data in a cross-process manner have the following advantages that:
1. Each shared memory region is written with exactly one group of data from one host process, so a dedicated region is opened up for every group of data of every host process. Service processes reading the data therefore need no synchronization or mutual exclusion of reads and writes, which improves communication efficiency, multi-process read/write efficiency and data transmission efficiency.
2. The neural network selects data intelligently: from the multiple groups of data written by the host processes, it picks out the data a waiting task is interested in, so that the service process corresponding to that task reads only the data it needs. This raises the degree of intelligence of fast big-data transmission.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a specific example of a method for quickly transmitting big data across processes in embodiment 1 of the present invention;
fig. 2 is a flowchart of another specific example of a method for quickly transmitting big data across processes in embodiment 1 of the present invention;
fig. 3 is a flowchart of another specific example of a method for quickly transmitting big data across processes in embodiment 1 of the present invention;
fig. 4 is a schematic block diagram of a specific example of an apparatus for quickly transmitting big data across processes in embodiment 2 of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In describing the present invention, it is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises" and/or "comprising," when used in this specification, are intended to specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The term "and/or" includes any and all combinations of one or more of the associated listed items. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Furthermore, certain drawings in this specification are flow charts illustrating methods. It will be understood that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be loaded onto a computer or other programmable apparatus to produce a machine, such that the instructions which execute on the computer or other programmable apparatus create means for implementing the functions specified in the flowchart block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.
Accordingly, blocks of the flowchart illustrations support combinations of means for performing the specified functions and combinations of steps for performing the specified functions. It will also be understood that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by special purpose hardware-based computer systems which perform the specified functions or steps, or combinations of special purpose hardware and computer instructions.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1
The embodiment provides a method for fast cross-process transmission of big data, which may be applied to an android system, for example in a scenario where big data must be transmitted quickly during multi-process communication, such as a host process passing camera preview data to a service process for computation (camera preview frames are generally 1280 × 720, with a new frame arriving roughly every 30 ms on average). As shown in fig. 1, the method includes the following steps:
s1, establishing a waiting task set according to a first-in first-out principle, wherein each waiting task element in the waiting task set corresponds to one service process among one or more service processes, namely each service process corresponds to one waiting task;
s2, each host process in one or more host processes writes a group of data by using a shared memory area, and the file descriptor of the shared memory area is transmitted to one or more service processes through a binder mechanism;
s3, obtaining label information of a group of data correspondingly written in each host process, wherein the label information comprises a data type, a data representation object, a data acquisition position, a data acquisition scene and the like; for example, the host process needs to transmit preview data of the camera, a data representation object of label information of the preview data is a camera irradiation target, a data acquisition position is a position where the camera is located, and a scene of the data acquisition position is a scene where the camera is located;
s4, calculating by using a neural network according to all the label information to obtain the label information of one or more groups of interested data of the waiting task corresponding to each service process, thereby selecting the interested data of the waiting task from the input multiple groups of data;
s5, each service process corresponding to one or more waiting tasks running at present reads one or more groups of interested data from the shared memory area through the file descriptor, and multi-process high-efficiency big data transmission is achieved.
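A minimal sketch of the first-in first-out waiting task set from s1, written in Python purely for illustration (the patent targets the android system, where this would be Java or Kotlin); the class name and fields below are hypothetical:

```python
from collections import deque

# Hypothetical sketch of s1: each waiting task is bound to exactly one
# service process, and tasks are scheduled first-in first-out.
class WaitingTaskSet:
    def __init__(self):
        self._queue = deque()

    def submit(self, service_process_id, description):
        self._queue.append((service_process_id, description))

    def next_task(self):
        # popleft() yields FIFO order: the oldest waiting task runs first.
        return self._queue.popleft()

tasks = WaitingTaskSet()
tasks.submit(101, "face detection")
tasks.submit(102, "scene classification")
assert tasks.next_task()[0] == 101  # submitted first, scheduled first
assert tasks.next_task()[0] == 102
```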
According to the method for fast cross-process transmission of big data, each shared memory region is written with one group of data from one host process, so a dedicated region is opened up for every group of data of every host process; service processes reading the data need no synchronization or mutual exclusion of reads and writes, which improves communication efficiency, multi-process read/write efficiency and data transmission efficiency. The neural network selects data intelligently: from the multiple groups of data written by the host processes, it picks out the data a waiting task is interested in, so that the corresponding service process reads only the data it needs, raising the degree of intelligence of fast big-data transmission.
Preferably, as shown in fig. 2, each host process in S2 writes a set of data by using the shared memory region, and the step of passing the file descriptor of the shared memory region to the service process through the binder mechanism includes:
s21, the host process creates a shared memory area with a fixed size and obtains a file descriptor of the shared memory area; preferably, the sharing mechanism is implemented at the bottom layer by mmap (memory mapping), so no data copy is needed, it is faster than binder, and it is not limited by the size of the transmitted data.
And S22, the host process transmits the file descriptor to the service process through a binder mechanism.
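The shared-memory write in s21 and the descriptor hand-off in s22 can be sketched with a desktop analogue: on android the region would be an ashmem area (for example via MemoryFile) and the descriptor would travel over binder, whereas here a plain temporary file, mmap and os.dup stand in; all names and sizes are hypothetical:

```python
import mmap
import os
import tempfile

SIZE = 4096  # fixed-size region, as in s21

# Host process side: create the region and write one group of data.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, SIZE)
writer = mmap.mmap(fd, SIZE)
payload = b"frame-0001:" + bytes(16)
writer[:len(payload)] = payload

# Service process side: a duplicated descriptor (standing in for the one
# binder would deliver in s22) maps the same region and reads it without
# any extra copy of the payload.
service_fd = os.dup(fd)
reader = mmap.mmap(service_fd, SIZE)
assert reader[:11] == b"frame-0001:"

reader.close(); writer.close()
os.close(service_fd); os.close(fd); os.unlink(path)
```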
Preferably, the host process of S21 creates a shared memory region with a fixed size, and the step of obtaining the file descriptor of the shared memory region includes:
s211, the host process acquires a system interface for creating the shared memory area through a reflection mechanism;
s212, the host process creates a shared memory area with a fixed size through the system interface, and obtains a file descriptor of the shared memory area. Because the application-layer SDK does not expose a complete interface for memory sharing, the shared memory cannot be used through normal development means; the relevant system interface therefore has to be obtained through a reflection mechanism before the shared memory area can be created.
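Since s211 retrieves a hidden system interface via reflection, here is a hedged illustration of the pattern: a stand-in class imitates a system object whose descriptor accessor is not part of the public surface, and the member is looked up by name instead of called directly (on android this would be java.lang.reflect with getDeclaredMethod, setAccessible(true) and invoke); every name and value below is hypothetical:

```python
# Hypothetical stand-in for a system object whose file-descriptor
# accessor is hidden from the application-layer SDK.
class _SystemSharedMemory:
    def __init__(self, size):
        self._size = size
        self._fd = 42  # placeholder descriptor value

    def _get_file_descriptor(self):  # the "hidden" system interface
        return self._fd

region = _SystemSharedMemory(4096)

# Reflection: resolve the member by name at runtime rather than through
# a public API, mirroring the Java reflection sequence described above.
hidden_accessor = getattr(region, "_get_file_descriptor")
assert hidden_accessor() == 42
```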
Preferably, the step of S4, according to all the tag information, performing calculation by using a neural network to obtain the tag information of one or more sets of data of interest corresponding to the waiting task for each service process, includes:
s41, obtaining label information of a plurality of groups of preset data to form a sample set, and recording label information corresponding to one or more groups of preset data which are interested in the waiting task;
s42, taking the sample set as an input sample, training and testing the multilayer neural network until the output label information is consistent with the recorded label information interested by the waiting task, and obtaining a trained neural network model; preferably, the multi-layer neural network is a three-layer structure, the first layer is an input layer, the input quantity is label information, the second layer is a hidden layer, the third layer is an output layer, and the output quantity is label information of one or more groups of data interested in the waiting task.
And S43, sequentially inputting all the label information serving as input quantity into the trained neural network model, and obtaining the label information of one or more groups of interested data of each service process corresponding to the waiting task.
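The three-layer network of s42 (an input layer fed with label information, one hidden layer, and an output layer scoring interest) can be sketched as a plain forward pass; the weights below are illustrative placeholders rather than trained values, and the numeric encoding of the four label fields is an assumption:

```python
import math

def forward(x, w_hidden, w_out):
    # Hidden layer: tanh units over the label-information feature vector.
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    # Output layer: a single sigmoid score, read as "how interesting
    # this group of data is to the waiting task".
    score = sum(w * h for w, h in zip(w_out, hidden))
    return 1.0 / (1.0 + math.exp(-score))

# Assumed encoding: [data type, represented object, acquisition position,
# acquisition scene], each mapped to a number in [0, 1].
camera_labels = [1.0, 0.8, 0.2, 0.5]
sensor_labels = [0.0, 0.1, 0.9, 0.3]
w_hidden = [[0.9, 0.7, -0.2, 0.1], [0.3, 0.5, 0.2, -0.4]]
w_out = [1.2, 0.8]

# As in s43, the group whose labels score highest is the one the waiting
# task's service process will read.
assert forward(camera_labels, w_hidden, w_out) > forward(sensor_labels, w_hidden, w_out)
```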
Preferably, as shown in fig. 3, the step of reading, by the file descriptor, one or more sets of data of interest from the shared memory area by each service process corresponding to one or more waiting tasks currently running in S5 includes:
s51-1, the service process corresponding to each waiting task in the one or more waiting tasks currently running sends a data reading inquiry message to the host process through a binder mechanism, and the inquiry message is used for confirming whether the host process completes the writing of the data;
s51-2, when the service process receives, through the binder mechanism, the feedback message to the data reading inquiry, this indicates that the host process has finished writing the data and the service process may read; the service process then creates a corresponding byte stream through the file descriptor and reads the one or more groups of data the waiting task is interested in from the shared memory area.
Preferably, in addition to the specific steps of S51-1 to S51-2, the step of reading, by the file descriptor, one or more groups of data of interest of each service process corresponding to one or more waiting tasks currently running in S5 from the shared memory area may further include the following specific steps:
s52-1, the service process corresponding to each of the one or more currently running waiting tasks starts a polling (busy-wait) loop to read its one or more groups of data of interest from the shared memory area through the file descriptor; the service process and the host process agree on an identifier (such as a serial number) and detect whether the data in the shared memory area has changed through a change of that identifier.
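The identifier-based polling of s52-1 can be sketched with two threads standing in for the host and service processes; the serial number is the agreed identifier, and all names and timing values are hypothetical:

```python
import threading
import time

shared = {"seq": 0, "data": b""}  # stands in for the shared memory region
lock = threading.Lock()

def host_write(payload):
    # Host process: publish the data, then bump the agreed identifier.
    with lock:
        shared["data"] = payload
        shared["seq"] += 1

def service_read(last_seq, timeout=1.0):
    # Service process: poll until the identifier changes, which signals
    # that fresh data is present in the region.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with lock:
            if shared["seq"] != last_seq:
                return shared["seq"], shared["data"]
        time.sleep(0.001)
    raise TimeoutError("no new data")

t = threading.Timer(0.05, host_write, args=(b"frame-0002",))
t.start()
seq, data = service_read(last_seq=0)
assert (seq, data) == (1, b"frame-0002")
t.join()
```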
Example 2
Corresponding to embodiment 1, this embodiment provides an apparatus for quickly transmitting big data across processes, as shown in fig. 4, including:
a waiting task set creating unit 1, configured to create a waiting task set according to a first-in first-out principle, wherein each waiting task element in the waiting task set corresponds to one service process among one or more service processes;
a data writing unit 2, configured to have each host process in one or more host processes write a group of data using a shared memory region, and to transmit the file descriptor of the shared memory region to one or more service processes through a binder mechanism;
the tag information acquiring unit 3 is configured to acquire tag information of a set of data written in by each host process, where the tag information includes a data type, a data representation object, a data acquisition position, a data acquisition scene, and the like;
the neural network selecting unit 4 is used for calculating by using the neural network according to all the label information to obtain label information of one or more groups of interested data of each service process corresponding to the waiting task;
and the data reading unit 5 is used for reading one or more groups of data which are interested by each service process corresponding to one or more waiting tasks currently running from the shared memory area through the file descriptor.
According to the device for fast cross-process transmission of big data, each shared memory region is written with one group of data from one host process, so a dedicated region is opened up for every group of data of every host process; service processes reading the data need no synchronization or mutual exclusion of reads and writes, which improves communication efficiency, multi-process read/write efficiency and data transmission efficiency. The neural network selects data intelligently: from the multiple groups of data written by the host processes, it picks out the data a waiting task is interested in, so that the corresponding service process reads only the data it needs, raising the degree of intelligence of fast big-data transmission.
Preferably, the data writing unit includes:
a file descriptor obtaining unit, configured to create a shared memory region with a fixed size by a host process, and obtain a file descriptor of the shared memory region;
and the file descriptor transmission unit is used for transmitting the file descriptor to the service process by the host process through a binder mechanism.
Preferably, the file descriptor obtaining unit includes:
the system interface acquisition unit is used for acquiring a system interface for establishing a shared memory area by a host process through a reflection mechanism;
and the file descriptor obtaining subunit is used for creating a shared memory area with a fixed size through the system interface by the host process and obtaining the file descriptor of the shared memory area.
Preferably, the neural network selecting unit includes:
the sample set constructing unit is used for acquiring label information of a plurality of groups of preset data to form a sample set and recording label information corresponding to one or more groups of preset data which are interested in the waiting task;
the training unit is used for training and testing the multilayer neural network by taking the sample set as an input sample until the output label information is consistent with the recorded label information interested by the waiting task, and obtaining a trained neural network model;
and the interested label information obtaining unit is used for sequentially inputting all the label information serving as input quantity into the trained neural network model and obtaining the label information of one or more groups of interested data of the waiting task corresponding to each service process.
Preferably, the data reading unit includes:
the query unit is used for sending a data reading query message to the host process by a binder mechanism by the service process corresponding to each waiting task in one or more waiting tasks currently running;
and the confirmation feedback unit is used for creating a corresponding byte stream through the file descriptor when the service process receives the feedback message of the data reading inquiry message sent by the host process through the binder mechanism, and reading one or more groups of data which are interested by the service process and correspond to the waiting task from the shared memory area.
Or the data reading unit includes:
the polling reading unit is used for starting an infinite loop in the service process corresponding to each of the one or more currently running waiting tasks, to read the one or more groups of data of interest from the shared memory area through the file descriptor; the service process and the host process agree on an identifier (such as a sequence number) and determine whether the data in the shared memory area has changed by observing changes of that identifier.
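The agreed-identifier variant can be sketched as a sequence number stored at the front of the region: the host bumps it after each write, and the service loops until the number differs from the last value it saw. A `bytearray` again stands in for the fd-backed region, and the header layout is an assumption for illustration:

```python
import struct
import threading
import time

HDR = struct.Struct("<I")            # agreed identifier: 4-byte sequence number
region = bytearray(HDR.size + 60)    # [seq | payload]

def write(data: bytes):
    """Host: write the payload, then bump the sequence number to publish it."""
    seq = HDR.unpack_from(region)[0]
    region[HDR.size:HDR.size + len(data)] = data
    HDR.pack_into(region, 0, seq + 1)

def poll_read(last_seq: int, timeout=5.0):
    """Service: loop until the identifier differs from the last seen value."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        seq = HDR.unpack_from(region)[0]
        if seq != last_seq:
            return seq, bytes(region[HDR.size:HDR.size + 5])
        time.sleep(0.001)            # back off instead of burning CPU
    raise TimeoutError("no new data in the shared region")

threading.Timer(0.05, write, args=(b"fresh",)).start()
seq, data = poll_read(last_seq=0)
```

A production version across real processes would additionally need memory ordering guarantees (e.g. a seqlock that marks in-progress writes); the sketch relies on the single-process execution model.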
Example 3
The embodiment provides a device for quickly transmitting big data across processes, which includes:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of embodiment 1 for fast transfer of big data across processes by an application.
Example 4
The present embodiment provides a storage medium having instructions stored thereon which, when executed by a processor, implement the method of embodiment 1 for fast transfer of big data across processes by an application.
Example 5
The present embodiment provides a computer program stored on a computer-readable storage medium and adapted to be executed on a computer, the computer program comprising instructions which, when the program runs on the computer, perform the steps of the method of embodiment 1 for fast transfer of big data across processes by an application.
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. It is neither necessary nor possible to exhaustively list all embodiments here, and obvious variations or modifications derived therefrom remain within the scope of the invention.
Claims (8)
1. A method for quickly transmitting big data across processes by an application, characterized by comprising the following steps:
S1, establishing a waiting task set according to a first-in first-out principle, wherein each waiting task element in the waiting task set corresponds to one of one or more service processes;
S2, each of one or more host processes writes a group of data using a shared memory area, and transmits the file descriptor of the shared memory area to the one or more service processes through a binder mechanism;
S3, obtaining label information of the group of data written by each host process, wherein the label information comprises a data type, a data representation object, a data acquisition position and a data acquisition scene;
S4, calculating with a neural network, according to all the label information, the label information of one or more groups of data of interest to the waiting task corresponding to each service process;
S5, reading, by the service process corresponding to each of the one or more currently running waiting tasks, the one or more groups of data of interest from the shared memory area through the file descriptor;
wherein the step of calculating with a neural network, according to all the label information, the label information of the one or more groups of data of interest to the waiting task corresponding to each service process comprises:
S41, obtaining label information of a plurality of groups of preset data to form a sample set, and recording the label information corresponding to the one or more groups of preset data that the waiting task is interested in;
S42, taking the sample set as an input sample, training and testing the multilayer neural network until the output label information is consistent with the recorded label information of interest to the waiting task, to obtain a trained neural network model;
and S43, sequentially inputting all the label information into the trained neural network model as input quantities, and obtaining the label information of the one or more groups of data of interest to the waiting task corresponding to each service process.
2. The method of claim 1, wherein each host process writes a set of data using a shared memory region, and wherein the step of passing the file descriptor of the shared memory region to the service process via a binder mechanism comprises:
s21, the host process creates a shared memory area with a fixed size, and obtains a file descriptor of the shared memory area;
and S22, the host process transmits the file descriptor to the service process through a binder mechanism.
3. The method of claim 2, wherein the host process creates a fixed-size shared memory region, and wherein obtaining the file descriptor of the shared memory region comprises:
s211, the host process acquires a system interface for creating the shared memory area through a reflection mechanism;
s212, the host process creates a shared memory area with a fixed size through the system interface, and obtains a file descriptor of the shared memory area.
4. The method according to claim 1, wherein the step of reading, through the file descriptor, the one or more groups of data of interest by the service process corresponding to each of the one or more currently running waiting tasks from the shared memory area comprises:
S51-1, the service process corresponding to each of the one or more currently running waiting tasks sends a data reading query message to the host process through the binder mechanism;
S51-2, when receiving the feedback message to the data reading query message sent by the host process through the binder mechanism, the service process creates a corresponding byte stream through the file descriptor and reads the one or more groups of data of interest to the waiting task corresponding to the service process from the shared memory area.
5. An apparatus for fast transferring big data across processes, comprising:
the waiting task set creating unit is used for creating a waiting task set according to a first-in first-out principle, wherein each waiting task element in the waiting task set corresponds to one of one or more service processes;
the data writing unit is used for each of one or more host processes to write a group of data using a shared memory area, and to transmit a file descriptor of the shared memory area to the one or more service processes through a binder mechanism;
the label information acquisition unit is used for acquiring label information of the group of data written by each host process, wherein the label information comprises a data type, a data representation object, a data acquisition position, a data acquisition scene, and the like;
the neural network selecting unit is used for calculating with the neural network, according to all the label information, the label information of one or more groups of data of interest to the waiting task corresponding to each service process;
the data reading unit is used for reading, by the service process corresponding to each of the one or more currently running waiting tasks, the one or more groups of data of interest from the shared memory area through the file descriptor;
the neural network selecting unit includes:
the sample set constructing unit is used for obtaining label information of a plurality of groups of preset data to form a sample set, and recording the label information corresponding to the one or more groups of preset data that the waiting task is interested in;
the training unit is used for training and testing the multilayer neural network by taking the sample set as an input sample, until the output label information is consistent with the recorded label information of interest to the waiting task, to obtain a trained neural network model;
and the interested label information obtaining unit is used for sequentially inputting all the label information into the trained neural network model as input quantities, and obtaining the label information of the one or more groups of data of interest to the waiting task corresponding to each service process.
6. The apparatus of claim 5, wherein the data writing unit comprises:
a file descriptor obtaining unit, configured to create a shared memory region with a fixed size by a host process, and obtain a file descriptor of the shared memory region;
and the file descriptor transmission unit is used for transmitting the file descriptor to the service process by the host process through a binder mechanism.
7. An apparatus for fast transferring big data across processes, comprising:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method for rapid transfer of large data across processes by an application as recited in any of claims 1-4.
8. A storage medium having stored thereon instructions that, when executed by a processor, implement a method for rapid transfer of large data across processes by an application according to any one of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110998847.3A CN113760578B (en) | 2021-08-28 | 2021-08-28 | Method, device, equipment and computer program for cross-process rapid transmission of big data |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110998847.3A CN113760578B (en) | 2021-08-28 | 2021-08-28 | Method, device, equipment and computer program for cross-process rapid transmission of big data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113760578A CN113760578A (en) | 2021-12-07 |
CN113760578B true CN113760578B (en) | 2022-04-19 |
Family
ID=78791597
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110998847.3A Active CN113760578B (en) | 2021-08-28 | 2021-08-28 | Method, device, equipment and computer program for cross-process rapid transmission of big data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113760578B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114401265A (en) * | 2021-12-15 | 2022-04-26 | 中孚安全技术有限公司 | TCP transparent proxy implementation method, system and device based on remote desktop protocol |
CN115016957B (en) * | 2022-05-26 | 2024-03-22 | 湖南三一智能控制设备有限公司 | Method, device, terminal and vehicle for cross-process memory sharing |
CN115357410B (en) * | 2022-08-24 | 2024-03-29 | 锐仕方达人才科技集团有限公司 | Data cross-process compressed storage method and system based on big data |
CN116709609B (en) * | 2022-09-30 | 2024-05-14 | 荣耀终端有限公司 | Message delivery method, electronic device and storage medium |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102004675A (en) * | 2010-11-11 | 2011-04-06 | 福建星网锐捷网络有限公司 | Cross-process data transmission method, device and network equipment |
CN109213611A (en) * | 2018-08-01 | 2019-01-15 | 天津字节跳动科技有限公司 | Cross-process communication method, device, terminal and storage medium
CN109508246A (en) * | 2018-06-25 | 2019-03-22 | 广州多益网络股份有限公司 | Log recording method, system and computer readable storage medium |
CN109669784A (en) * | 2017-10-13 | 2019-04-23 | 华为技术有限公司 | A kind of method and system of interprocess communication |
US10474512B1 (en) * | 2016-09-29 | 2019-11-12 | Amazon Technologies, Inc. | Inter-process intra-application communications |
CN111506436A (en) * | 2020-03-25 | 2020-08-07 | 炬星科技(深圳)有限公司 | Method for realizing memory sharing, electronic equipment and shared memory data management library |
CN111651286A (en) * | 2020-05-27 | 2020-09-11 | 泰康保险集团股份有限公司 | Data communication method, device, computing equipment and storage medium |
CN111897666A (en) * | 2020-08-05 | 2020-11-06 | 北京图森未来科技有限公司 | Method, apparatus and system for communication between multiple processes |
CN111984430A (en) * | 2019-05-22 | 2020-11-24 | 厦门雅迅网络股份有限公司 | Many-to-many process communication method and computer readable storage medium |
CN112506684A (en) * | 2021-02-05 | 2021-03-16 | 全时云商务服务股份有限公司 | Method, system and storage medium for quickly transmitting big data across processes |
CN112749023A (en) * | 2019-10-30 | 2021-05-04 | 阿里巴巴集团控股有限公司 | Information processing method, device, equipment and system |
CN112802232A (en) * | 2021-03-22 | 2021-05-14 | 智道网联科技(北京)有限公司 | Video stream data transmission method and related device thereof |
CN112906075A (en) * | 2021-03-15 | 2021-06-04 | 北京字节跳动网络技术有限公司 | Memory sharing method and device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200364100A1 (en) * | 2019-05-14 | 2020-11-19 | Microsoft Technology Licensing, Llc | Memory abstraction for lock-free inter-process communication |
- 2021-08-28: application CN202110998847.3A filed in China; granted as patent CN113760578B (legal status: Active)
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102004675A (en) * | 2010-11-11 | 2011-04-06 | 福建星网锐捷网络有限公司 | Cross-process data transmission method, device and network equipment |
US10474512B1 (en) * | 2016-09-29 | 2019-11-12 | Amazon Technologies, Inc. | Inter-process intra-application communications |
CN109669784A (en) * | 2017-10-13 | 2019-04-23 | 华为技术有限公司 | A kind of method and system of interprocess communication |
CN109508246A (en) * | 2018-06-25 | 2019-03-22 | 广州多益网络股份有限公司 | Log recording method, system and computer readable storage medium |
CN109213611A (en) * | 2018-08-01 | 2019-01-15 | 天津字节跳动科技有限公司 | Cross-process communication method, device, terminal and storage medium
CN111984430A (en) * | 2019-05-22 | 2020-11-24 | 厦门雅迅网络股份有限公司 | Many-to-many process communication method and computer readable storage medium |
CN112749023A (en) * | 2019-10-30 | 2021-05-04 | 阿里巴巴集团控股有限公司 | Information processing method, device, equipment and system |
CN111506436A (en) * | 2020-03-25 | 2020-08-07 | 炬星科技(深圳)有限公司 | Method for realizing memory sharing, electronic equipment and shared memory data management library |
CN111651286A (en) * | 2020-05-27 | 2020-09-11 | 泰康保险集团股份有限公司 | Data communication method, device, computing equipment and storage medium |
CN111897666A (en) * | 2020-08-05 | 2020-11-06 | 北京图森未来科技有限公司 | Method, apparatus and system for communication between multiple processes |
CN112506684A (en) * | 2021-02-05 | 2021-03-16 | 全时云商务服务股份有限公司 | Method, system and storage medium for quickly transmitting big data across processes |
CN112906075A (en) * | 2021-03-15 | 2021-06-04 | 北京字节跳动网络技术有限公司 | Memory sharing method and device |
CN112802232A (en) * | 2021-03-22 | 2021-05-14 | 智道网联科技(北京)有限公司 | Video stream data transmission method and related device thereof |
Also Published As
Publication number | Publication date |
---|---|
CN113760578A (en) | 2021-12-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113760578B (en) | Method, device, equipment and computer program for cross-process rapid transmission of big data | |
US8605081B2 (en) | Converting 3D data to hogel data | |
KR100894520B1 (en) | Electronic conference system, electronic conference support method, electronic conference control apparatus, and portable storage device | |
JPH11126196A (en) | Data transfer method and computer system suitable for it | |
CN103414904A (en) | Data compression storage method for medical high-capacity digital imaging and communications in medicine (DICOM) dynamic images | |
CN105227850B (en) | Enabled metadata storage subsystem | |
CN101997900A (en) | Cross-terminal copying and pasting system, device and method | |
CN109890012A (en) | Data transmission method, device, system and storage medium | |
CN111400598A (en) | Information push method, server, multi-port repeater and storage medium | |
CN103838746A (en) | Method for multiple CPU systems to share storage data and systems | |
CN111400213B (en) | Method, device and system for transmitting data | |
JPH1042279A (en) | Device and method for controlling camera | |
WO2024245219A1 (en) | Data processing method, model training method, and related device | |
CN104025026B (en) | Configuration and status register of the access for configuration space | |
CN113934480B (en) | Layer resource configuration method, device and system | |
CN205050186U (en) | Real-time automatic room booking system
WO2019037073A1 (en) | Method, device and sever for data synchronization | |
CN114691033B (en) | Data replication method, data storage system control method, device, equipment and medium | |
CN117201518B (en) | Data transmission method, system, device, storage medium and electronic equipment | |
CN117676325B (en) | Control method and related device in multi-camera scene | |
CN114125380B (en) | Target monitoring system and target monitoring method | |
CN115168354B (en) | Integrated processing method and device for event stream of mobile terminal | |
CN102868877A (en) | Real-time image communication system and method | |
CN119363821A (en) | Communication method and device of storage system and computer equipment | |
CN105259846A (en) | Intelligent robot realizing seamless connection among systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||