
CN109347906B - Data transmission method, device and server - Google Patents

Data transmission method, device and server

Info

Publication number
CN109347906B
CN109347906B (application CN201811008654.3A)
Authority
CN
China
Prior art keywords
log
node
user request
disk
state machine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811008654.3A
Other languages
Chinese (zh)
Other versions
CN109347906A (en)
Inventor
燕皓阳
赵森
苏仙科
曹宝山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201811008654.3A priority Critical patent/CN109347906B/en
Publication of CN109347906A publication Critical patent/CN109347906A/en
Application granted granted Critical
Publication of CN109347906B publication Critical patent/CN109347906B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1095: Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/06: Management of faults, events, alarms or notifications
    • H04L 41/0654: Management of faults, events, alarms or notifications using network fault recovery
    • H04L 41/0668: Management of faults, events, alarms or notifications using network fault recovery by dynamic selection of recovery network elements, e.g. replacement by the most appropriate element after failure
    • H04L 41/30: Decision processes by autonomous network management units using voting and bidding
    • H04L 43/00: Arrangements for monitoring or testing data switching networks
    • H04L 43/10: Active monitoring, e.g. heartbeat, ping or trace-route

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Cardiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a data processing method, an apparatus, and a server. The method includes: receiving a user request sent by a client; recording the user request in the form of a log, and reading the log after the recording is finished; performing strongly consistent network transceiving under an asynchronous remote call service framework of an abstract network layer; and invoking a state machine engine to receive a strong consistency callback, so that the state machine engine executes the user request.

Description

Data transmission method, device and server
Technical Field
The present invention relates to the field of database technologies, and in particular, to a data processing method, an apparatus, and a server.
Background
Redis master-slave replication is asynchronous: the slave server periodically reports the processing progress of the replication stream to the master server.
Redis is a key-value storage system. Like Memcached, it caches data in memory for efficiency, but it supports a richer set of value types, including string, list, set, zset (sorted set), and hash. These types support push/pop, add/remove, intersection, union, difference, and other rich operations, all of which are atomic, and Redis additionally supports several ways of sorting. Unlike Memcached, Redis periodically writes updated data to disk, or appends each modification operation to a log file, and implements master-slave synchronization on this basis.
Redis supports master-slave synchronization. Data can be synchronized from a master server to any number of slave servers, and a slave server can in turn act as a master for other slaves, so Redis can perform single-level tree replication. Writes to the storage can occur intentionally or unintentionally. Because the publish/subscribe mechanism is fully implemented, a slave synchronizing anywhere in the tree can subscribe to a channel and receive the master server's complete record of published messages. Synchronization helps with the scalability of read operations and with data redundancy.
However, Redis does not provide a strongly consistent synchronization mode, and therefore cannot cover financial-grade application scenarios such as banking and insurance.
Disclosure of Invention
In order to solve the technical problems in the prior art, embodiments of the present invention provide a data processing method, an apparatus, a server, and a storage medium. The technical solutions are as follows:
in one aspect, a data processing method is provided, including: receiving a user request sent by a client; recording the user request in the form of a log, and reading the log after the recording is finished; performing strongly consistent network transceiving under an asynchronous remote call service framework of an abstract network layer; and invoking a state machine engine to receive a strong consistency callback, so that the state machine engine executes the user request.
In one aspect, a data processing apparatus is provided, including: a receiving module for receiving a user request sent by a client; a log reading module for recording the user request in the form of a log and reading the log after the recording is finished; a strong consistency transceiving module for performing strongly consistent network transceiving under the asynchronous remote call service framework of the abstract network layer; and a calling module for invoking the state machine engine to receive the strong consistency callback, so that the state machine engine executes the user request.
In another aspect, a server is provided, which includes the foregoing apparatus.
In another aspect, a storage medium is provided, storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the aforementioned data processing method.
The technical solutions provided by the embodiments of the present invention have the following beneficial effects: strongly consistent transceiving guarantees high security and high reliability of data processing, so the data processing procedure can be used in financial application scenarios such as banking and insurance, and in other scenarios requiring strict security levels.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings based on them without creative effort.
Fig. 1 is a schematic diagram of a distributed data management unit according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a network architecture using a common strong consistency algorithm and a log library according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a distributed data management unit corresponding to a schematic diagram of a network architecture using a common strong consistency algorithm and a log library according to an embodiment of the present invention;
FIG. 4 is a flow chart illustrating steps of a data processing method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an interaction process of a data processing method according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a transmission path of a heartbeat packet according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a heartbeat packet sending interaction process provided in an embodiment of the present invention;
FIG. 8 is a schematic diagram of a node lifecycle provided by an embodiment of the present invention;
FIG. 9 is a functional block diagram of a data processing apparatus according to an embodiment of the present invention;
FIG. 10 is a functional block diagram of a receiving module provided by an embodiment of the invention;
FIG. 11 is a functional block diagram of a log reading module provided by an embodiment of the present invention;
FIG. 12 is a functional block diagram of a strong coherent transceiver module provided by an embodiment of the present invention;
FIG. 13 is a functional block diagram of a master-slave synchronization module provided by an embodiment of the present invention;
FIG. 14 is a functional block diagram of a calling module provided by an embodiment of the present invention;
FIG. 15 is a functional block diagram of a data processing apparatus including a result return module according to an embodiment of the present invention;
FIG. 16 is a functional block diagram of an alternative sub-module of a strong coherent transceiver module provided by an embodiment of the present invention;
fig. 17 is a schematic diagram of a server structure according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
In an embodiment of the present invention, as shown in fig. 1, a distributed data management unit is proposed. Each data management unit is called a shard, and fig. 1 illustrates the shard layout on one machine. Each service ID (bid) corresponds to a replica set, and each replica set includes a plurality of shards; each shard manages a shared memory block of a certain size, and the underlying engine organizes and manages these memory blocks to realize the functions of the Redis interfaces. The CPU ID of the service to which a shard belongs can be calculated from the shard ID. The shards of one replica set are, in principle, distributed across different machines: if the whole replica set spans the 3 machines A, B, and C, then when a bid is created, one shard is created on each of A, B, and C, and after the master and backup roles are designated, user requests can be received and served.
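As an aside, the shard-to-CPU mapping mentioned above can be expressed as simple arithmetic. The following is only an illustrative sketch; the patent does not give this code, and the constant and function names (kNumCpus, CpuOfShard) are our assumptions:
#include <cstdint>
// Assumption: shard IDs are assigned round-robin across CPUs, so the owning
// CPU can be derived from the shard ID alone.
constexpr uint32_t kNumCpus = 4;
uint32_t CpuOfShard(uint32_t shard_id) {
    return shard_id % kNumCpus;  // shard-0 -> CPU 0, shard-1 -> CPU 1, ...
}
For example, with kNumCpus = 4, shard-5 would be served by CPU 1, and a request arriving on any other CPU would be forwarded as shown in the code fragment below.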
According to the above description, the first shard (shard-0), second shard (shard-1), third shard (shard-2), and fourth shard (shard-3) in fig. 1 belong to different services. A user request is sent at random to any CPU; if the service on the CPU that receives the user request finds, by querying the route, that it is not the target shard, the request is forwarded to the target CPU through the submit function (submit) and processed by the service on the target CPU. Each shard also corresponds to one of the shared memories, the first shared memory (shm-0), second shared memory (shm-1), third shared memory (shm-2), and fourth shared memory (shm-3), which are used to store the key contents (key values). The following is the request-forwarding code fragment:
// Forward the request to the CPU that owns the target shard.
return submit(ctx->CPU(), [this, ctx] {
    Process(ctx);
});
In some financial business scenarios, the consistency requirement on the data of each replica set is high, and the framework shown in fig. 1 cannot fully maintain the strong consistency property between replicas.
In one embodiment, as shown in fig. 2, to ensure compatibility, log replication and the consistency algorithm are not implemented directly on top of the asynchronous remote call service framework (RPC); instead, an internal common strong consistency algorithm and log library (library) is employed to implement the basic logic and interface calls of the log replication and consistency protocol. The common strong consistency algorithm and log library (library) provides a conventional synchronous interface, and because the asynchronous remote call service framework cannot directly make blocking disk IO and network IO calls, the difficulty in implementing the strong consistency logic lies mainly in combining the asynchronous framework with the synchronous interface. Therefore, in the strong consistency logic, the asynchronous disk IO, network IO, and state machine engine based on the asynchronous remote call service framework (RPC) are encapsulated into synchronous interfaces for the common strong consistency algorithm and log library (library) to call back, so that strongly consistent replication between the master shard and the slave shards can be realized.
Fig. 3 shows the distributed data management unit corresponding to fig. 2. Combining fig. 2 and fig. 3, a strong consistency shard object is added at the same level as the shards. The strong consistency shard object can be regarded as a node (Node) in the strong consistency transmission protocol and has the complete functionality of a single node in that protocol. Each strong consistency shard contains the common strong consistency algorithm and log library (library), synchronous interfaces comprising the strong consistency basic logic, the disk and network IO logic, the state machine engine (FSM) implementation, and the interfaces for the common strong consistency algorithm and log library (library) to call back. Each node in the strong consistency transmission protocol has its own role and is one of master node, candidate node, or slave node. The master node receives a user's read and write requests; a slave node accepts only read requests, and after receiving a write request it forwards the request to the master node via the asynchronous remote call service framework (RPC). After receiving a write request, the master node serializes the request into a log entry and performs a disk-dropping (write-to-disk) operation, then reads the persisted log from disk and synchronizes it to the other slave nodes through network IO; after the other slave nodes receive the log synchronization request, they write the log to disk and notify the master node once the log has been successfully persisted. Finally, when the master node finds that a majority of nodes have successfully written the log to the log file, it calls the underlying engine and at the same time notifies the slave nodes to call their underlying engines, the write operation corresponding to the log is applied, and the result is returned to the user.
In the above process, a user request first reaches a node (Node) in the strong consistency transmission protocol and is transferred between the strong consistency nodes; only after confirming that the log has reached a majority of nodes is the underlying engine called.
The request path of a user request within a single strong consistency shard is shown in fig. 2:
the client (Client) receives the request from the network and then sends the packet to the strong consistency module;
a timer (Timer) periodically drives the common strong consistency module to respond to timeout events;
the network layer (Network) provides the network transceiving function of the asynchronous remote call service framework (RPC) and drives the strong consistency module;
the state machine engine (FSM) receives callbacks from the strong consistency logic, applies the user request to the actual engine, and returns the user result;
the common strong consistency algorithm and log library (library) implements the synchronous interfaces of the consistency algorithm and log protocol and is driven by the remote call service framework;
and the disk read-write logic (DiskLog) processes the log read and write requests from the common strong consistency algorithm and log library. A minimal interface sketch of these components is given below.
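The following is only our illustrative sketch of how these components might be declared; the patent names the roles but none of the C++ signatures, so every type and method name here is an assumption:
#include <cstdint>
#include <string>
// Illustrative component interfaces for the request path in fig. 2.
struct Network {                  // abstract network layer (RPC transceiving)
    virtual void SendLog(int node_id, const std::string& log) = 0;
    virtual ~Network() = default;
};
struct DiskLog {                  // disk read-write logic
    virtual void Append(const std::string& log) = 0;   // synchronous disk write
    virtual std::string Read(uint64_t index) = 0;      // read a persisted entry
    virtual ~DiskLog() = default;
};
struct Fsm {                      // state machine engine
    virtual void Apply(const std::string& log) = 0;    // apply a committed entry
    virtual ~Fsm() = default;
};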
The user request and reply process is shown in fig. 4. The abstract network layer (Network), the state machine engine (FSM), and the disk read-write logic (DiskLog) in fig. 4 all encapsulate the asynchronous interfaces of the asynchronous remote call service framework (RPC) into synchronous interfaces and provide them for the common strong consistency algorithm and log library (library) to call. When the asynchronous remote call service framework (RPC) needs to call a synchronous interface of the common strong consistency algorithm and log library (library), for example when a user request arrives, a specific interface in the common strong consistency algorithm and log library (library) must be called within the asynchronous remote call service framework (RPC) to receive the user request, for example the user data receiving interface (RecvUserData). Specifically, fig. 4 depicts the following data processing steps:
step S401, receiving a user request sent by a client.
The client receives the user request, packages it, and sends it to the common strong consistency algorithm module. The common strong consistency algorithm module contains nodes of three roles: the master node, the slave nodes, and the candidate nodes, which receive the packets from the client.
Step S402, recording the user request in the form of a log, and reading the log after the recording is finished.
After receiving the write request, the master node in the common strong consistency algorithm module serializes the request into a log entry, then sends a log disk-dropping request to the disk read-write logic, and after the disk write completes, the master node reads back the persisted log.
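As a sketch only (reusing the illustrative DiskLog interface above; the entry layout and function names are our assumptions, not the patent's), step S402 amounts to:
// An illustrative log entry: the user request serialized with its position.
struct Entry {
    uint64_t term;     // election term in which the entry was created
    uint64_t index;    // position of the entry in the log
    std::string data;  // the serialized user request
};
// S402: persist the entry, then read it back before replication (S403).
void OnWriteRequest(DiskLog& disk, const Entry& entry) {
    disk.Append(entry.data);                         // log disk-dropping request
    std::string persisted = disk.Read(entry.index);  // read the persisted log
    // 'persisted' is what gets synchronized to the slave nodes in step S403
}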
Step S403, performing strong-consistency network transceiving under an asynchronous remote call service framework (RPC) of the abstract network layer.
The common strong consistency algorithm module synchronizes the read log to the other slave nodes through network IO; after receiving the log synchronization request, the other slave nodes perform the disk-dropping operation for the request, and if the disk-dropping succeeds, the slave nodes notify the master node.
Step S404, a state machine engine (FSM) is invoked to receive a strong consistency callback, so that the state machine engine executes the user request.
When the master node finds that a majority of the nodes have successfully written the log to the log file, it calls the underlying state machine engine and at the same time notifies the slave nodes to call their underlying engines, the write operation corresponding to the log is applied, and the result is then returned to the user.
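The majority test in step S404 is plain counting; a minimal sketch (the quorum arithmetic is standard, while the function names and the reuse of the Fsm and Entry sketches above are our assumptions):
// An entry may be applied only once a majority of the cluster persisted it.
bool HasQuorum(int acks, int cluster_size) {
    return acks >= cluster_size / 2 + 1;  // e.g. 2 of 3, 3 of 5
}
void MaybeApply(Fsm& fsm, const Entry& entry, int acks, int cluster_size) {
    if (HasQuorum(acks, cluster_size)) {
        fsm.Apply(entry.data);  // the strong consistency callback into the FSM
        // ... then notify the slaves to apply, and return the result ...
    }
}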
In conclusion, strong consistency brings high security and high reliability, ensuring that the data processing procedure can be used in financial application scenarios such as banking and insurance, and in other scenarios requiring a strict security level. Fig. 5 depicts the interaction flow of the above steps.
In the asynchronous remote call service framework (RPC), the only place where a synchronous interface can be called is in an RPC framework thread; calling it elsewhere would cause a program abort (coredump). So in the scenario of receiving a user request, an asynchronous remote call service framework (RPC) thread is started, which calls a specific interface in the common strong consistency algorithm and log library (library), e.g. RecvUserData(). Each asynchronous remote call service framework (RPC) thread runs on the CPU that created it and does not share a stack, so no locks are required for thread safety. Several asynchronous remote call service framework (RPC) threads exist during the strong consistency implementation, whether the RPC framework is calling the synchronous interface of the common strong consistency algorithm and log library (library) or the library is calling back into the asynchronous implementation. For example, the future interface may be encapsulated as a synchronous interface to achieve synchronous encapsulation of asynchronous interfaces:
// Asynchronous implementation: future completes once the entry is on disk.
Future<> DisklogImpl_::AppendEntry(const Entry& entry) {
    return gate(_gate, [this, entry] {
        return lock(_lock, for_write(), [this, entry] {
            // ... asynchronous disk write (elided in the original) ...
        });
    });
}
// Synchronous wrapper: get() blocks until the asynchronous append completes.
void DisklogImp_::AppendEntry(const Entry& entry) {
    _impl.AppendEntry(entry).get();
}
This encapsulates an asynchronous method into a synchronous interface: the get() function waits for the future returned by the asynchronous AppendEntry() to become complete. When the future's state becomes complete, the other RPC framework threads on the CPU can continue to execute. Thus, through the above steps, an interface of the asynchronous execution framework can be encapsulated synchronously.
In the strongly consistent common library object (library), a server takes one of three roles: master node, slave node, or candidate node. Among the three, the master node occupies the core position and receives requests from clients. A client does not send requests to a slave node; even if it does, the slave node rejects the client's request and returns the master node's IP address to the client. Data between the master node and the slave nodes flows in one direction, from the master node to the slave nodes in the form of heartbeat packets, and the heartbeat packets carry log entries. The slave nodes receive heartbeat packets from the master node; if a slave node does not receive the master node's heartbeat within a specified time, it becomes a candidate node and sends voting requests to the other slave nodes, and when a majority of slave nodes vote for the candidate node, the candidate node becomes the master node. While acting as a candidate node, it does not accept heartbeat packets sent by the master node.
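As an illustration of the role handling just described (a sketch under assumed names; the patent gives the behavior but not this code):
#include <string>
enum class Role { Master, Slave, Candidate };
struct Reply {
    bool redirected;        // true if the client must retry at the master
    std::string master_ip;  // returned so the client can redirect
};
// A slave rejects client requests and returns the master's IP address.
Reply HandleClientRequest(Role role, const std::string& master_ip) {
    if (role != Role::Master) return Reply{true, master_ip};
    return Reply{false, ""};  // the master accepts and processes the request
}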
Driven by a timer (Timer), the master node sends heartbeat packets to each slave node. When there is no log to synchronize, the content of the heartbeat packet is empty; otherwise, the logs that need synchronizing are carried along. The master node can judge the timeout condition of a slave node in order to perform disaster recovery operations, and a slave node can judge whether the master node has timed out from the interval between received heartbeat packets; if the master node has timed out, the slave node becomes a candidate node and initiates an election. The heartbeat packet transmission path is shown in fig. 6:
step S601, the main node sends a heartbeat packet to the slave node; when no log needing synchronization exists, the content of the heartbeat packet is null; when the log needs to be synchronized, the log that needs to be synchronized is brought on the tape.
Step S602, the master node determines the timeout condition of the slave node to perform disaster recovery operation.
Step S603, the slave node judges whether the master node is overtime according to the interval of receiving the heartbeat packet; when the master node is used, the slave nodes become candidate nodes to initiate election application.
Fig. 7 shows the interaction process of heartbeat packet transmission.
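A heartbeat packet as described above might be modeled as follows (a sketch; the field names are our assumptions, and Entry is the illustrative struct defined earlier):
#include <vector>
// Empty when there is nothing to synchronize, otherwise it carries the log
// entries to replicate to the slave node.
struct Heartbeat {
    uint64_t term;            // the master's current term
    std::vector<Entry> logs;  // empty => pure keep-alive heartbeat
};
bool IsKeepAlive(const Heartbeat& hb) {
    return hb.logs.empty();   // "the content of the heartbeat packet is null"
}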
In a specific embodiment, the slave node's timer periodically calls a run-period module (run_period), which checks the time of the last heartbeat sent by the master node. If no heartbeat from the master node has been received for a long time (exceeding a time threshold), the slave node turns itself into a candidate node and initiates an election. The candidate node persists the election meta-information to the local disk logic through the current write-term module and the current voting-factor module, and it can call back the disk logic through the read-last-directory-information module and the read-term module to obtain its own term and index. The term and index are used in the election request; during the election, the other nodes decide whether to grant their vote according to the term and index of the electing node. The candidate node initiates the election to the other nodes in the cluster by calling back the abstract network layer with a send command, and receives the return packets of the election request through a receive-function callback into the abstract network layer. As shown in fig. 8, the candidate node is time-bounded, and this constraint is reflected in the candidate node's own term. The life cycle of a candidate node includes an election period, a split-vote period, and a normal operation period. In the election period, the candidate node sends vote requests to the slave nodes; during the split-vote period, the candidate node receives votes from the slave nodes. In addition, if two candidate nodes initiate voting at the same time, which of them becomes the master node is decided by competition between the two candidates. Since the two candidates obtained the same number of votes, they both wait out a timeout period, for example 300 ms, before sending vote requests again; after the wait, the probability of their requesting votes at the same moment is greatly reduced, so the candidate that sends its vote request first receives a majority of approvals and becomes the master node. When the other candidate node sends its vote request later, the slave nodes have already voted for the first candidate and can no longer vote for the second, so the second candidate fails the election and finally becomes an ordinary slave node again.
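The split-vote recovery described above depends on the two candidates not retrying at the same instant. A minimal sketch of a randomized retry delay (the 300 ms figure comes from the text; the jitter range and names are our assumptions about how such timers are typically built):
#include <chrono>
#include <random>
// After a split vote, wait roughly 300 ms plus jitter before requesting votes
// again, so that two candidates are unlikely to collide a second time.
std::chrono::milliseconds ElectionRetryDelay(std::mt19937& rng) {
    std::uniform_int_distribution<int> jitter(0, 150);  // range assumed
    return std::chrono::milliseconds(300 + jitter(rng));
}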
In one possible embodiment, consider the following scenario for the user data receiving interface: the client sends the server the data "_META_DATA_\r\n_USER_DATA_", and it is required that the first receive call reads only the leading "_META_DATA_\r\n", while subsequent calls read the _USER_DATA_ portion. Because TCP is a streaming protocol (Stream) and _META_DATA_ is not of fixed length, there is no way to guarantee that a single receive call will not also read part of _USER_DATA_, unless one character is read at a time. Here the peek flag (MSG_PEEK) of the receive function can be considered. The prototype of the receive function is ssize_t recv(int s, void *buf, size_t len, int flags); normally flags is set to 0, in which case recv reads the data in the TCP buffer into the user buffer and removes the read data from the TCP buffer. With flags set to MSG_PEEK, the data in the TCP buffer is only copied into the user buffer; it is not removed from the TCP buffer, and a subsequent recv call can read the same data again. For the above scenario, recv(fd, buf, nbuf, MSG_PEEK) peeks at the data to find the position pos of "\r\n", and then recv(fd, buf, pos + 2, 0) reads (and removes) exactly the metadata. Of course, this case is extreme; often, even if part of _USER_DATA_ is read in one recv, that part can be stored and prepended to the data of subsequent recv calls, but storing it adds extra complexity when the different recv calls span many functions.
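A minimal sketch of this peek-then-consume pattern (POSIX sockets; the buffer handling and helper name are our choices, and error paths are simplified):
#include <sys/socket.h>
#include <cstring>
// Peek to locate "\r\n" without consuming _USER_DATA_, then consume exactly
// the metadata plus the delimiter.
ssize_t ReadMetaData(int fd, char* buf, size_t cap) {
    ssize_t n = recv(fd, buf, cap, MSG_PEEK);  // data stays in the TCP buffer
    if (n <= 0) return n;
    char* cr = static_cast<char*>(memchr(buf, '\r', static_cast<size_t>(n)));
    if (cr == nullptr || cr + 1 >= buf + n || cr[1] != '\n')
        return -1;  // the delimiter has not fully arrived yet
    size_t pos = static_cast<size_t>(cr - buf);
    return recv(fd, buf, pos + 2, 0);  // read (and remove) "_META_DATA_\r\n"
}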
In a possible embodiment, another scenario can be considered: the same port supports both a text protocol and a binary protocol. Using the peek flag (MSG_PEEK) to look at the first few characters, determine whether the protocol is text or binary, and then dispatch the request is also a good choice. Of course, calling with MSG_PEEK before the real recv costs one extra function call.
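For instance (a sketch; the marker byte and handler names are assumptions, not from the patent):
void HandleTextProtocol(int fd);    // assumed handlers, defined elsewhere
void HandleBinaryProtocol(int fd);
// Dispatch by peeking a single byte: here we assume the text protocol always
// starts with a printable marker such as '*', as in Redis's RESP.
void Dispatch(int fd) {
    char first = 0;
    if (recv(fd, &first, 1, MSG_PEEK) != 1) return;  // byte is not consumed
    if (first == '*') HandleTextProtocol(fd);
    else              HandleBinaryProtocol(fd);
}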
In one possible embodiment, in the asynchronous remote call service framework (RPC), the only place where a synchronous interface can be called is in a framework thread; otherwise a program abort (coredump) would result. Each asynchronous remote call service framework (RPC) thread runs on the CPU that created it and does not share a stack, so no locks are required for thread safety. Since the threads in the asynchronous remote call service framework (RPC) do not share a stack, a channel-like relationship forms between them. In the normal mode, if the loop function (loop) is executed twice in series, two consecutive outputs are produced, for example:
package main

import "fmt"

func loop() {
	for i := 0; i < 10; i++ {
		fmt.Printf("%d", i)
	}
}

func main() {
	loop()
	loop()
}
The result of this execution is 01234567890123456789.
Whereas in the asynchronous remote call service framework (RPC), if the loop function (loop) is executed twice in series, the result 0123456789 is output only once. This is because, in the asynchronous framework, the main function has already exited before the second loop executes. Therefore, to prevent the main function from exiting too early in the asynchronous framework, a waiting approach can be adopted, i.e., adding a wait in the main function:
func main() {
	loop()
	loop()
	time.Sleep(time.Second) // requires importing "time"; a crude wait
}
however, the way of increasing the latency will certainly increase the overall execution time consumption, so a notification function can be added between threads to add a block for another thread:
for thread in threads:
    thread.join()
the removal of the blocking is only informed after the channel execution is completed, which may lead to a possibility of deadlock. Thus, having an asynchronous remote call service framework (RPC) as the only place where a synchronous interface can be called reduces the potential for aborts due to other calls.
In one embodiment of the present invention, as shown in fig. 9, a functional block diagram of a data processing apparatus is provided. The apparatus includes: a receiving module for receiving a user request sent by a client; a log reading module for recording the user request in the form of a log and reading the log after the recording is finished; a strong consistency transceiving module for performing strongly consistent network transceiving under the asynchronous remote call service framework (RPC) of the abstract network layer; and a calling module for invoking a state machine engine (FSM) to receive a strong consistency callback, so that the state machine engine executes the user request.
In an alternative embodiment, as shown in fig. 10, a schematic block diagram of the receiving module is provided, comprising: a receiving submodule for receiving the user request from the client when the master node receives the user request from the client; and a rejection submodule for rejecting the request when a slave node receives a request from the client, and returning the network address of the master node to the client.
In an alternative embodiment, as shown in fig. 11, a schematic block diagram of the log reading module is provided, comprising: a user request receiving submodule for receiving the user request; a serialization submodule for serializing the user request into a request log; a disk-drop request submodule for sending a log disk-dropping request to the disk read-write logic; and a reading submodule for receiving the information returned by the disk read-write logic and reading the disk-dropped log when the disk-dropping is finished.
In an alternative embodiment, as shown in fig. 12, a schematic block diagram of the strong consistency transceiving module is provided, comprising: a master-slave synchronization submodule for the master node to synchronize the read log to the slave nodes; and a disk-drop notification submodule for the slave nodes to perform the disk-dropping operation on the log and notify the master node when the disk-dropping succeeds.
In an alternative embodiment, as shown in fig. 13, a schematic block diagram of the master-slave synchronization submodule is provided, comprising: a heartbeat packet sending submodule for the master node to send heartbeat packets to the slave nodes, the content of a heartbeat packet being empty when there is no log to synchronize and carrying the logs to be synchronized otherwise; a master-node timeout judging submodule for the master node to judge the timeout condition of the slave nodes for disaster recovery operations; and a slave-node timeout judging submodule for judging whether the master node has timed out from the interval between received heartbeat packets, the slave node becoming a candidate node and initiating an election when the master node times out.
In an alternative embodiment, as shown in fig. 14, a functional block diagram of the calling module is provided, comprising: a calling submodule for calling the state machine engine (FSM) when the master node receives notifications of successful log-file writes from a majority of the slave nodes, and notifying the state machine engine (FSM) to write the log; and a result returning submodule for returning the result to the client.
In an alternative embodiment, as shown in FIG. 15, a functional block diagram of a data processing apparatus is provided that includes a result return module: and the result returning module is used for returning an execution result to the client after the state machine engine executes the user request.
In an alternative embodiment, as shown in fig. 16, other optional submodules of the strong consistency transceiving module are provided, including: a thread waiting module for waiting for the parameter state returned by a single thread during strongly consistent network transceiving under the asynchronous remote call service framework (RPC) of the abstract network layer; a parameter state monitoring module for monitoring the parameter state; and a synchronous encapsulation module for continuing to execute the other threads of the asynchronous remote call service framework (RPC) after the parameter state becomes complete. Through the above process, an interface in the asynchronous framework can be encapsulated as a synchronous one.
Referring to fig. 17, a schematic structural diagram of a server according to an embodiment of the present invention is shown. The server is used to implement the server-side data processing method provided in the above embodiments. Specifically:
The server 1200 includes a central processing unit (CPU) 1201, a system memory 1204 including a random access memory (RAM) 1202 and a read-only memory (ROM) 1203, and a system bus 1205 connecting the system memory 1204 and the central processing unit 1201. The server 1200 also includes a basic input/output system (I/O system) 1206 that facilitates the transfer of information between devices within the computer, and a mass storage device 1207 for storing an operating system 1213, application programs 1214, and other program modules 1215.
The basic input/output system 1206 includes a display 1208 for displaying information and an input device 1209, such as a mouse, keyboard, etc., for a user to input information. Wherein the display 1208 and input device 1209 are connected to the central processing unit 1201 through an input-output controller 1210 coupled to the system bus 1205. The basic input/output system 1206 may also include an input/output controller 1210 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input-output controller 1210 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 1207 is connected to the central processing unit 1201 through a mass storage controller (not shown) connected to the system bus 1205. The mass storage device 1207 and its associated computer-readable media provide non-volatile storage for the server 1200. That is, the mass storage device 1207 may include a computer-readable medium (not shown) such as a hard disk or CD-ROM drive.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 1204 and mass storage device 1207 described above may be collectively referred to as memory.
According to various embodiments of the present invention, the server 1200 may also run by connecting to a remote computer over a network such as the Internet. That is, the server 1200 may connect to the network 1212 through a network interface unit 1211 coupled to the system bus 1205, or the network interface unit 1211 may be used to connect to other types of networks or remote computer systems (not shown).
The memory also includes one or more programs stored in the memory and configured to be executed by one or more processors. The one or more programs include instructions for performing the method of the backend server side.
In an exemplary embodiment, a non-transitory computer readable storage medium is further provided, for example, a memory including instructions executable by a processor of a terminal to perform the steps of the sender client side or the receiver client side in the above method embodiments, or executed by a processor of a server to perform the steps of the background server side in the above method embodiments. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It should be understood that reference to "a plurality" herein means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (11)

1. A data processing method, characterized in that the method is applied to a common strong consistency algorithm module, the common strong consistency algorithm module comprising a master node and slave nodes, and the method comprises the following steps:
receiving a user request sent by a client;
recording the user request in the form of a log, and reading the log after the recording is finished;
encapsulating the asynchronous implementations of asynchronous disk IO, network IO, and a state machine engine based on an asynchronous remote call service framework into synchronous interfaces;
performing strongly consistent network transceiving by calling back the synchronous interfaces under the asynchronous remote call service framework (RPC) of an abstract network layer;
and when the master node receives successful disk-dropping notifications from a majority of the slave nodes, invoking a state machine engine (FSM) to receive a strong consistency callback, so that the state machine engine executes the user request.
2. The method of claim 1, wherein receiving the user request sent by the client comprises: the master node receiving the user request from the client; and when a slave node receives a request from the client, rejecting the request and returning the network address of the master node to the client.
3. The method of claim 1, wherein logging the user request and reading the log after logging is complete comprises:
receiving a user request;
serializing the user request into a request log;
sending a log disk-dropping request to disk read-write logic;
and receiving the information returned by the disk read-write logic, and reading the disk-dropped log when the disk-dropping is completed.
4. The method of claim 1, wherein performing strongly consistent network transceiving under the asynchronous remote call service framework (RPC) of the abstract network layer comprises:
the master node synchronizes the read logs to the slave nodes;
and the slave nodes performing the disk-dropping operation on the log and notifying the master node when the disk-dropping succeeds.
5. The method of claim 4, wherein the common strong consistency algorithm module further comprises a candidate node, and wherein the step of the master node synchronizing the read logs to the slave nodes comprises:
the master node sending a heartbeat packet to the slave nodes, the content of the heartbeat packet being empty when there is no log to synchronize, and carrying the logs that need to be synchronized otherwise;
the master node judging the timeout condition of the slave nodes in order to perform disaster recovery operations;
and a slave node judging whether the master node has timed out from the interval between received heartbeat packets, the slave node becoming a candidate node and initiating an election when the master node times out.
6. The method of claim 1, wherein invoking the state machine engine (FSM) to receive a strong consistency callback so that the state machine engine executes the user request comprises:
when the master node receives notifications of successful log-file writes from a majority of the slave nodes, calling the state machine engine (FSM) and notifying it to write the log, and then returning the result to the client.
7. The method of claim 1, wherein the state machine engine returns the execution result to the client after executing the user request.
8. The method of claim 1, wherein, during the strongly consistent network transceiving under the asynchronous remote call service framework (RPC) of the abstract network layer, the parameter state returned by a single thread is waited for, and when the parameter state becomes complete, execution of the other threads of the asynchronous remote call service framework (RPC) continues.
9. A data processing apparatus, characterized in that the apparatus is applied to a common strong consistency algorithm module, the common strong consistency algorithm module comprising a master node and slave nodes, and the apparatus comprises:
the receiving module is used for receiving a user request sent by a client;
the log reading module is used for recording the user request in a log form and reading the log after the recording is finished;
an encapsulation module for encapsulating the asynchronous implementations of asynchronous disk IO, network IO, and a state machine engine based on an asynchronous remote call service framework into synchronous interfaces;
the strong consistency transceiving module is used for carrying out strong consistency network transceiving by calling back the synchronous interface under an asynchronous remote call service framework (RPC) of an abstract network layer;
and a calling module for invoking a state machine engine (FSM) to receive a strong consistency callback when the master node receives successful disk-dropping notifications from a majority of the slave nodes, so that the state machine engine executes the user request.
10. A server, characterized in that it comprises the apparatus of claim 9.
11. A storage medium having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by a processor to implement the data processing method of any one of claims 1 to 8.
CN201811008654.3A 2018-08-30 2018-08-30 Data transmission method, device and server Active CN109347906B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811008654.3A CN109347906B (en) 2018-08-30 2018-08-30 Data transmission method, device and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811008654.3A CN109347906B (en) 2018-08-30 2018-08-30 Data transmission method, device and server

Publications (2)

Publication Number Publication Date
CN109347906A CN109347906A (en) 2019-02-15
CN109347906B true CN109347906B (en) 2021-04-20

Family

ID=65296682

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811008654.3A Active CN109347906B (en) 2018-08-30 2018-08-30 Data transmission method, device and server

Country Status (1)

Country Link
CN (1) CN109347906B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222118B (en) * 2019-05-23 2022-04-05 上海易点时空网络有限公司 Asynchronous data processing method and device based on queue
CN112671601B (en) * 2020-12-11 2023-10-31 航天信息股份有限公司 Interface monitoring system and method based on Zookeeper
CN112769824B (en) * 2021-01-07 2023-03-07 深圳市大富网络技术有限公司 Information transmission state updating method, terminal, device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103036961A (en) * 2012-12-07 2013-04-10 蓝盾信息安全技术股份有限公司 Distributed collection and storage method of journal
CN105122727A (en) * 2013-01-11 2015-12-02 Db网络公司 Systems and methods for detecting and mitigating threats to a structured data storage system
CN105512266A (en) * 2015-12-03 2016-04-20 曙光信息产业(北京)有限公司 Method and device for achieving operational consistency of distributed database
CN106201739A (en) * 2016-06-29 2016-12-07 上海浦东发展银行股份有限公司信用卡中心 A kind of remote invocation method of Storm based on Redis
CN106789095A (en) * 2017-03-30 2017-05-31 腾讯科技(深圳)有限公司 Distributed system and message treatment method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8572031B2 (en) * 2010-12-23 2013-10-29 Mongodb, Inc. Method and apparatus for maintaining replica sets
US9141681B2 (en) * 2012-11-29 2015-09-22 Red Hat, Inc. Creating a column family in a database
CN104618127B (en) * 2013-11-01 2019-01-29 深圳市腾讯计算机系统有限公司 Active and standby memory node switching method and system
CN104283956B (en) * 2014-09-30 2016-01-20 腾讯科技(深圳)有限公司 Strong consistency distributed data storage method, Apparatus and system
US10126980B2 (en) * 2015-04-29 2018-11-13 International Business Machines Corporation Managing data operations in a quorum-based data replication system
CN106301938A (en) * 2016-08-25 2017-01-04 成都索贝数码科技股份有限公司 A kind of high availability and the data base cluster system of strong consistency and node administration method thereof

Also Published As

Publication number Publication date
CN109347906A (en) 2019-02-15

Similar Documents

Publication Publication Date Title
US9632828B1 (en) Computing and tracking client staleness using transaction responses
US6990606B2 (en) Cascading failover of a data management application for shared disk file systems in loosely coupled node clusters
EP4332870A1 (en) Transaction data processing method and apparatus, computer device and storage medium
US20030187927A1 (en) Clustering infrastructure system and method
CN111480157A (en) System and method for adding nodes in a blockchain network
US20230370285A1 (en) Block-chain-based data processing method, computer device, computer-readable storage medium
CN111881116A (en) Data migration method, data migration system, computer system, and storage medium
US11526493B2 (en) Generalized reversibility framework for common knowledge in scale-out database systems
CN109347906B (en) Data transmission method, device and server
CN102724304A (en) Information warehouse federation in subscription/release system and data synchronization method
CN115098229A (en) Transaction processing method, device, node device and storage medium
CN112527759B (en) Log execution method and device, computer equipment and storage medium
CN112925614B (en) Distributed transaction processing method, device, medium and equipment
CN114448983A (en) ZooKeeper-based distributed data exchange method
CN103064898A (en) Business locking and unlocking method and device
US11522966B2 (en) Methods, devices and systems for non-disruptive upgrades to a replicated state machine in a distributed computing environment
US20040236990A1 (en) Transaction branch management to ensure maximum branch completion in the face of failure
CN114896258B (en) Transaction data synchronization method and device, computer equipment and storage medium
CN112463887A (en) Data processing method, device, equipment and storage medium
US20240205032A1 (en) Blockchain data processing method, apparatus, and device, computer-readable storage medium, and computer program product
WO2024108348A1 (en) Method and system for eventual consistency of data types in geo-distributed active-active database systems
EP4287021A1 (en) Request processing method and apparatus, computing device and storage medium
CN117931531B (en) Data backup system, method, apparatus, device, storage medium and program product
Frolund et al. Building storage registers from crash-recovery processes
Duranti Microservice Oriented Pipeline Architectures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant