
CN116737724A - Cache data full synchronization method and device - Google Patents

Cache data full synchronization method and device

Info

Publication number
CN116737724A
CN116737724A (application CN202310720377.3A)
Authority
CN
China
Prior art keywords
data
cache
slot
full synchronization
synchronization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310720377.3A
Other languages
Chinese (zh)
Inventor
武文斌
黄海鹏
傅兵
李晓萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202310720377.3A priority Critical patent/CN116737724A/en
Publication of CN116737724A publication Critical patent/CN116737724A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure provides a cache data full synchronization method, relates to the technical field of cloud computing, and can be applied to the technical field of finance. The method is applied to a cache database master node; the master node stores data using a dual-hash architecture that comprises a main hash table and a cache hash table, the cache hash table being used for storing temporary incremental data. The method comprises the following steps: in response to a master-slave node full synchronization instruction, starting a full synchronization thread and setting a full synchronization flag bit for data synchronization, wherein the full synchronization flag bit is used for representing the current full synchronization state of the master node; in response to an operation instruction of a client, determining a target slot and a client operation type; and performing data read-write operations according to the flag bit of the target slot, the full synchronization flag bit and the client operation type, wherein the flag bit of a slot is used for representing the current operation type of that slot. The disclosure also provides a cache data full synchronization apparatus, a device, a storage medium and a program product.

Description

Cache data full synchronization method and device
Technical Field
The present disclosure relates to the field of cloud computing technologies, and in particular, to the field of data synchronization technologies, and more particularly, to a method, an apparatus, a device, a storage medium, and a program product for cache data full synchronization.
Background
Redis (Remote Dictionary Server) is a high-performance key-value in-memory database with high performance, rich data types, good scalability, and other advantages, and is widely used in the Internet field. To ensure high availability and avoid single points of failure, two nodes are typically started in master-slave mode and kept in data synchronization. When the master and slave nodes are started for the first time, or after a long disconnection, they perform full synchronization: the master node must fork a child process to generate an RDB (Redis Database) file and then push the full data set to the slave node.
However, Redis is briefly blocked at the moment fork is invoked, for anywhere from a few milliseconds to hundreds of milliseconds depending on the amount of stored data, which has a significant impact on latency-sensitive applications. Meanwhile, while the forked child generates the RDB file, frequent client writes cause the COW mechanism to duplicate memory pages, occupying considerably more memory, so that memory utilization is low and resources are wasted.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
In view of the foregoing, the present disclosure provides a cache data full synchronization method, apparatus, device, storage medium, and program product.
According to a first aspect of the present disclosure, there is provided a cache data full synchronization method applied to a cache database master node, the master node storing data using a double hash architecture, the double hash architecture including a master hash table and a cache hash table, the cache hash table being used for storing temporary incremental data, the method comprising:
responding to a master-slave node full synchronization instruction, starting a full synchronization thread and setting a full synchronization flag bit for data synchronization, wherein the full synchronization flag bit is used for representing the current full synchronization state of a master node;
determining a target slot and a client operation type in response to an operation instruction of the client; and
and performing data read-write operations according to the flag bit of the target slot, the full synchronization flag bit and the client operation type, wherein the flag bit of a slot is used for representing the current operation type of the slot.
According to an embodiment of the present disclosure, the performing data read-write operations according to the flag bit of the target slot, the full synchronization flag bit, and the client operation type includes:
When the client operation type is determined to be a read operation, returning a result according to the data of the target slot in the cache hash table;
and when the client operation type is determined to be a write operation and the full synchronization flag bit indicates that full synchronization is in progress, performing a data write operation according to the flag bit of the target slot.
According to an embodiment of the present disclosure, the performing a data writing operation according to the flag bit of the target slot includes:
if the target slot is determined to be performing full synchronization operation according to the flag bit of the target slot, updating the flag bit of the target slot;
when the flag bit of the target slot is successfully updated, writing the client data into the target slot of the main hash table;
when updating the flag bit of the target slot is determined to have failed, determining a blocking waiting time; and
when the blocking waiting time is greater than a preset threshold, writing the client data into the target slot of the cache hash table.
According to an embodiment of the present disclosure, the returning a result according to the data of the target slot in the cache hash table includes:
if the data of the target slot in the cache hash table is determined to be empty, reading the data of the target slot in the main hash table; and
if the data of the target slot in the cache hash table is determined not to be empty, reading the data of the target slot in the cache hash table.
According to an embodiment of the present disclosure, after performing data read-write operations according to the flag bit of the target slot, the full synchronization flag bit, and the client operation type, the method further includes:
and merging the data in the cache hash table into the main hash table.
According to an embodiment of the present disclosure, the merging the data in the cache hash table into the master hash table includes:
when determining that data exists in any slot in the cache hash table, changing the flag bit of any slot; and
and moving the data in the cache hash table to a main hash table.
According to an embodiment of the present disclosure, the starting the full synchronization thread and setting the full synchronization flag bit for data synchronization includes:
updating the full synchronization flag bit and the flag bit of the slot to be synchronized;
executing data packing operation on the slots to be synchronized; and
and after the execution is completed, recovering the flag bit of the slot to be synchronized.
A second aspect of the present disclosure provides a buffered data full synchronization device applied to a buffered database master node, the master node storing data using a double hash architecture, the double hash architecture including a master hash table and a buffered hash table, the buffered hash table being used to store temporary incremental data, the device comprising:
The data synchronization module is used for responding to a master-slave node full synchronization instruction, starting a full synchronization thread and setting a full synchronization flag bit to perform data synchronization, wherein the full synchronization flag bit is used for representing the current full synchronization state of the master node;
the determining module is used for responding to the operation instruction of the client and determining the target slot and the client operation type; and
the data operation module is used for performing data read-write operations according to the flag bit of the target slot, the full synchronization flag bit and the client operation type, wherein the flag bit of a slot is used for representing the current operation type of the slot.
According to an embodiment of the present disclosure, the data operation module includes: a first determination sub-module and a second determination sub-module.
The first determining submodule is used for returning a result according to the data of the target slot in the cache hash table when the client operation type is determined to be a read operation;
and the second determining submodule is used for performing data writing operation according to the flag bit of the target slot when the client operation type is determined to be writing operation and the full synchronization flag bit is determined to be full synchronization.
According to an embodiment of the present disclosure, the second determination submodule includes: an updating unit, a first determining unit, a second determining unit and a writing unit,
the updating unit is used for updating the flag bit of the target slot if the target slot is determined to be undergoing a full synchronization operation according to the flag bit of the target slot;
the first determining unit is used for writing the client data into the target slot of the main hash table when the flag bit of the target slot is successfully updated;
a second determining unit, configured to determine blocking waiting time when determining that updating of the flag bit of the target slot fails; and
and the writing unit is used for writing the client data into the target slot in the cache hash table when the blocking waiting time is larger than a preset threshold value.
According to an embodiment of the present disclosure, the first determination submodule includes: a third determination unit and a fourth determination unit.
A third determining unit, configured to read the data of the target slot in the main hash table if it is determined that the data of the target slot in the cache hash table is empty; and
a fourth determining unit, configured to read the data of the target slot in the cache hash table if it is determined that the data of the target slot in the cache hash table is not empty.
According to an embodiment of the present disclosure, further comprising: and a data merging module.
And the data merging module is used for merging the data in the cache hash table into the main hash table.
According to an embodiment of the present disclosure, the data merging module includes: a first update sub-module and a data merge sub-module,
the first updating sub-module is used for changing the flag bit of any slot when determining that data exists in that slot of the cache hash table; and
and the data merging sub-module is used for moving the data in the cache hash table to the main hash table.
According to an embodiment of the present disclosure, the data synchronization module includes: a second updating sub-module, a data packing sub-module and a third updating sub-module.
The second updating sub-module is used for updating the full synchronization flag bit and the flag bit of the slot to be synchronized;
the data packing sub-module is used for executing a data packing operation on the slot to be synchronized; and
the third updating sub-module is used for recovering the flag bit of the slot to be synchronized after the execution is completed.
A third aspect of the present disclosure provides an electronic device, comprising: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the cache data full synchronization method described above.
A fourth aspect of the present disclosure also provides a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to perform the above-described cache data full synchronization method.
A fifth aspect of the present disclosure also provides a computer program product comprising a computer program which, when executed by a processor, implements the cache data full synchronization method described above.
According to the cache data full synchronization method provided by the embodiment of the disclosure, full synchronization is performed with multiple threads: when a master-slave node full synchronization instruction is received, a full synchronization thread is started to perform data synchronization while the main thread processes client operation requests, and data are stored using a dual hash architecture in which the main hash table stores data normally and the cache hash table stores temporary incremental data when the two threads conflict. After receiving a client operation instruction, the target slot and the client operation type are determined, and data operations are performed according to the flag bit of the target slot, the full synchronization flag bit and the client operation type. Compared with the related art, the cache data full synchronization method provided by the embodiment of the disclosure improves the Redis master-slave full synchronization mechanism, avoids the processing delay caused by the brief blocking of forking a child process during full synchronization and the extra memory consumption caused by the COW mechanism while the RDB is generated, improves physical memory utilization, and avoids resource waste.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be more apparent from the following description of embodiments of the disclosure with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an application scenario diagram of a cache data full synchronization method, apparatus, device, storage medium and program product according to an embodiment of the present disclosure;
FIG. 2a schematically illustrates a schematic diagram of a dual hash architecture provided in accordance with an embodiment of the present disclosure;
FIG. 2b schematically illustrates an architecture diagram of a buffered data full synchronization apparatus provided according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow chart of a method for cache data full synchronization provided in accordance with an embodiment of the present disclosure;
FIG. 4 schematically illustrates a flow chart of a method for data synchronization by a full-scale synchronization thread provided in accordance with another embodiment of the present disclosure;
FIG. 5 schematically illustrates a flowchart of a method for performing data read and write operations according to the flag bit of the target slot, the full synchronization flag bit, and the client operation type according to another embodiment of the present disclosure;
FIG. 6 schematically illustrates a flow chart of a method for data synchronization when a client operation type is a read operation, provided in accordance with another embodiment of the present disclosure;
FIG. 7 schematically illustrates a flow chart of a method of data synchronization when a client operation type is a write operation, provided in accordance with another embodiment of the present disclosure;
FIG. 8a schematically illustrates one of the flowcharts of a data merge method provided in accordance with yet another embodiment of the present disclosure;
FIG. 8b schematically illustrates a second flowchart of a data merge method provided in accordance with yet another embodiment of the present disclosure;
FIG. 9 schematically illustrates a block diagram of a cache data full synchronization apparatus according to an embodiment of the disclosure; and
fig. 10 schematically illustrates a block diagram of an electronic device adapted to implement a buffered data full synchronization method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where expressions like "at least one of A, B and C" are used, they should generally be interpreted in accordance with the meaning commonly understood by those skilled in the art (e.g., "a system having at least one of A, B and C" shall include, but not be limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
The terms appearing in the embodiments of the present disclosure will first be explained:
REDIS: remote Dictionary Server remote dictionary service, an open source is written in C language, and a client SDK in multiple languages is provided based on a sustainable key-Value database of a memory.
RDB: and storing the full memory snapshot of the Redis Database at a certain moment in the disk in a binary format.
Fork: is a Linux system call, and generates a new child process, which replicates the data and stack space of the parent process and inherits the code, environment variables, working directory, resource limitations, etc. of the parent process.
COW: copy-on-Write, after a process calls for fork (), the parent and child processes share a memory page instead of immediately copying the memory page into two copies, and only when a certain process tries to modify data, an exception is triggered, the operating system kernel does a real Copy action, and then the parent and child processes each independently share one Copy of data.
CAS: compare and Swap, an atomic operation, which guarantees data consistency without locking, specifically operates by comparing old values first, replacing them with new values if they meet expectations, belonging to an optimistic lock strategy, and not being blocked and suspended even if the update fails.
The expression CAS(V, E, N) indicates that, for variable V, if its current value is E it is updated to N and true is returned; otherwise nothing is done and false is returned.
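As an illustration of the CAS(V, E, N) semantics just described, the following sketch expresses it with C11 atomics; the cas() helper name is an assumption introduced here for clarity and is reused in the later sketches in this description, not part of the patent.

```c
/* Illustrative only: CAS(V, E, N) expressed with C11 atomics.
 * atomic_compare_exchange_strong writes the observed value back into
 * `expected` on failure; that side effect is ignored here to match the
 * CAS(V, E, N) description above. */
#include <stdatomic.h>
#include <stdbool.h>

static bool cas(atomic_int *v, int expected, int desired) {
    return atomic_compare_exchange_strong(v, &expected, desired);
}

/* Example: with atomic_int flag = 0, cas(&flag, 0, 1) returns true and sets
 * flag to 1; a second cas(&flag, 0, 1) returns false and leaves flag as 1. */
```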
Redis is a high-performance key-value in-memory database with high performance, rich data types, good scalability, and other advantages, and is widely used in the Internet field. To ensure high availability and avoid single points of failure, two nodes are typically started in master-slave mode and kept in data synchronization. When the master and slave nodes are started for the first time, or after a long disconnection, they perform full synchronization: the master node must fork a child process to generate an RDB file and then push the full data set to the slave node.
However, the fork mechanism has the following problems:
1) fork blocking
Redis is briefly blocked at the moment fork is invoked, for anywhere from a few milliseconds to hundreds of milliseconds depending on the amount of stored data, which has a significant impact on latency-sensitive applications.
2) Memory consumption by COW mechanism
While the forked child is generating the RDB file, frequent client writes cause the COW mechanism to duplicate memory pages, occupying up to twice the memory space. In the worst case every page shared by the parent and child processes is copied, and the effective memory utilization is only 50%. To avoid a process crash caused by running out of physical memory during COW, about half of the physical memory is therefore usually kept in reserve in practice, resulting in low memory utilization and serious resource waste.
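The COW behaviour underlying these two problems can be seen in the following minimal C sketch (an illustration added for clarity, not part of the patent): the forked child keeps seeing the data as it was at fork() time, while the parent's later write forces the kernel to copy the touched page.

```c
/* Minimal fork/COW demonstration: the child shares the parent's pages at
 * fork() time; a later write by the parent triggers a real page copy, so
 * the child still prints the original "snapshot data". */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    char *buf = malloc(4096);
    if (buf == NULL) return 1;
    strcpy(buf, "snapshot data");

    pid_t pid = fork();                 /* the brief blocking happens here */
    if (pid == 0) {
        /* child: sees the data exactly as it was at fork() time */
        printf("child reads:  %s\n", buf);
        _exit(0);
    }
    strcpy(buf, "new data");            /* parent write triggers a COW page copy */
    waitpid(pid, NULL, 0);
    printf("parent reads: %s\n", buf);
    free(buf);
    return 0;
}
```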
Based on the above technical problems, an embodiment of the present disclosure provides a cache data full synchronization method, including: in response to a master-slave node full synchronization instruction, starting a full synchronization thread and setting a full synchronization flag bit for data synchronization; in response to an operation instruction of a client, determining a target slot and a client operation type; and performing data read-write operations according to the flag bit of the target slot, the full synchronization flag bit and the client operation type.
Fig. 1 schematically illustrates an application scenario diagram of a buffered data full synchronization method, apparatus, device, storage medium and program product according to an embodiment of the present disclosure.
As shown in fig. 1, the application scenario 100 according to this embodiment may include a cache data full synchronization scenario. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. Various communication client applications, such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only) may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 105 may be a cache database master node application server that uses a multithreading mode instead of forking a child process when performing master-slave full synchronization. It uses a dual HASH architecture: one main HASH table functions the same as in the original community open-source version, and the other is a cache HASH table for temporarily storing incremental data during master-slave full synchronization. After receiving a master-slave node full synchronization instruction, the server starts a full synchronization thread and sets a full synchronization flag bit to perform data synchronization, while the main thread responds to client operation instructions and performs data operations according to the client operation type and the target slot.
It should be noted that the cache data full synchronization method provided by the embodiments of the present disclosure may generally be performed by the server 105. Accordingly, the cache data full synchronization apparatus provided by the embodiments of the present disclosure may generally be disposed in the server 105. The cache data full synchronization method provided by the embodiments of the present disclosure may also be performed by a server or server cluster that is different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the cache data full synchronization apparatus provided by the embodiments of the present disclosure may also be disposed in a server or server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
It should be noted that, the method and the device for synchronizing the whole amount of the cache data determined by the embodiments of the present disclosure may be used in the technical field of cloud computing, or may be used in the technical field of finance, or may be used in any field other than the financial field, and the application field of the method and the device for synchronizing the whole amount of the cache data determined by the embodiments of the present disclosure is not limited.
Fig. 2a schematically illustrates a schematic diagram of a dual hash architecture provided according to an embodiment of the present disclosure, and fig. 2b schematically illustrates an architecture diagram of a cache data full synchronization device provided according to an embodiment of the present disclosure. As shown in fig. 2a, a dual hash architecture is used: one main hash table functions the same as in the original community open-source version and is used for storing cache data, and the other is used for temporarily storing incremental data when the full synchronization thread conflicts with the main thread during master-slave full synchronization. As shown in fig. 2b, the architecture includes a Redis process A1, created when the Redis program starts, which contains a main thread A2 and a full synchronization thread A3. The main thread A2 processes client requests, parses and executes the corresponding commands, and runs timed tasks; the full synchronization thread A3 handles full synchronization between the master and slave nodes and pushes packed data to the slave node. The main hash table A4 stores all Redis data by default and is used for fast key lookup. The cache hash table A5 is used when, during master-slave full synchronization, the main thread and the synchronization thread are processing the same SLOT, which would cause a data consistency problem; in that case the main thread temporarily stores the data in the cache hash table. The atomic variable A6 is the full synchronization flag bit, indicating whether full synchronization is in progress; the atomic variable A7 is the flag bit of each hash slot, indicating the current state of the slot. The slots in the main hash table and the cache hash table correspond one to one.
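The following C sketch illustrates one possible layout of the dual hash architecture A4 to A7 described above; the slot count, type names and field names are illustrative assumptions and are not taken from the patent or from the Redis source.

```c
/* A minimal sketch of the dual-hash layout described above (A4-A7). */
#include <stdatomic.h>

#define NUM_SLOTS 16384                  /* assumed number of hash slots */

typedef struct dict dict;                /* opaque per-slot key-value container */

typedef struct {
    dict      *main_slots[NUM_SLOTS];    /* A4: main hash table, holds all data by default   */
    dict      *cache_slots[NUM_SLOTS];   /* A5: cache hash table, temporary incremental data */
    atomic_int full_sync_flag;           /* A6: whether full synchronization is in progress  */
    atomic_int slot_flag[NUM_SLOTS];     /* A7: per-slot state, one-to-one with the slots    */
} dual_hash_db;
```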
The cache data full synchronization method according to the embodiments of the present disclosure will be described in detail below by using fig. 3 to 7 based on the application scenario described in fig. 1 and the architecture described in fig. 2a and 2 b.
Fig. 3 schematically illustrates a flowchart of a method for cache data full synchronization according to an embodiment of the present disclosure. As shown in fig. 3, the buffered data full synchronization method of the embodiment includes operations S210 to S230, which may be performed by a server or other computing device. The cache data full synchronization method of the embodiment of the disclosure is applied to a cache database master node, the cache database can be, for example, a Redis database, the master node stores data by using a double-hash architecture, the double-hash architecture comprises a master hash table and a cache hash table, and the cache hash table is used for storing temporary incremental data.
In operation S210, in response to the master-slave node full synchronization instruction, a full synchronization thread is started and a full synchronization flag bit is set for data synchronization.
According to an embodiment of the present disclosure, the full synchronization flag bit is used to characterize a current full synchronization state of the master node.
In operation S220, a target slot and a client operation type are determined in response to an operation instruction of the client.
In operation S230, data read-write operation is performed according to the flag bit of the target slot, the full-scale synchronization flag bit, and the operation type of the client.
According to an embodiment of the present disclosure, a flag bit of a slot is used to characterize a current operation type of the slot.
In one example, after the master node receives the master-slave node full synchronization instruction, no child process needs to be forked; a full synchronization thread is started to perform data synchronization and a full synchronization flag bit FULL_SYNC_FLAG is set, which characterizes the current full synchronization state. Its value is one of two: TRUE indicates that full synchronization is in progress; FALSE indicates that it is not. The data synchronization process may refer to operations S211 to S213 shown in fig. 4.
In one example, while a slot in the main hash table is undergoing full synchronization, the main thread is still processing client operation requests, including, for example, write requests or read requests. A client read request does not conflict with the data synchronization performed by the full synchronization thread: regardless of whether the target slot requested by the client is being fully synchronized, the data can be read from the hash table and the result returned normally.
In one example, when the target slot requested by the client is currently being synchronized and the client request is a write request, write permission for the target slot may not be obtainable at that moment, which would cause a data consistency problem.
In one example, to ensure data consistency between multiple threads and avoid read-write collision between threads, the embodiment of the disclosure adopts a CAS mechanism, specifically, determines a corresponding target slot and an operation type according to a key in a client operation instruction, updates a flag bit of the target slot according to the client operation type, and performs data operation according to an update return result. The specific process can be seen in the operational steps shown in fig. 5-7.
According to the cache data full synchronization method provided by the embodiment of the disclosure, full synchronization is performed with multiple threads: when a master-slave node full synchronization instruction is received, a full synchronization thread is started to perform data synchronization while the main thread processes client operation requests, and data are stored using a dual hash architecture in which the main hash table stores data normally and the cache hash table stores temporary incremental data when the two threads conflict. After receiving a client operation instruction, the target slot and the client operation type are determined, and data operations are performed according to the flag bit of the target slot, the full synchronization flag bit and the client operation type. Compared with the related art, the cache data full synchronization method provided by the embodiment of the disclosure improves the Redis master-slave full synchronization mechanism, avoids the processing delay caused by the brief blocking of forking a child process during full synchronization and the extra memory consumption caused by the COW mechanism while the RDB is generated, improves physical memory utilization, and avoids resource waste.
The process of data synchronization by the full synchronization thread will be described with reference to fig. 4, and fig. 4 schematically illustrates a flowchart of a method for data synchronization by the full synchronization thread according to another embodiment of the present disclosure. As shown in fig. 4, operation 210 includes operations S211 to S213.
In operation S211, the full synchronization flag bit and the flag bit of the slot to be synchronized are updated.
In operation S212, a data packing operation is performed on the slots to be synchronized.
In operation S213, after execution is completed, the flag bit of the slot to be synchronized is restored.
In one example, the flag bit of a SLOT is defined as HASH_SLOT, and its values fall into 4 classes: HASH_SLOT_IDLE = 0, indicating that no thread is operating on the slot; HASH_SLOT_SYNC = 1, indicating that the slot is performing full synchronization (read-only); HASH_SLOT_WRITE = 2, indicating that the slot is performing a write operation; and HASH_SLOT_MERGE = 3, indicating that a data merge/update operation is in progress. These values are checked only when FULL_SYNC_FLAG = TRUE.
In one example, after the full synchronization thread is started, the full synchronization flag is first set to TRUE, the CAS command CAS(SLOTn, HASH_SLOT_IDLE, HASH_SLOT_SYNC) is executed, and the flag of the slot to be synchronized is set to HASH_SLOT_SYNC, indicating that the slot is performing full synchronization. The initial flag bit of a slot is HASH_SLOT_IDLE, indicating that no thread is operating on it. The data packing flow is then executed for the current slot, and the flag bit is restored after it finishes: CAS(SLOTn, HASH_SLOT_SYNC, HASH_SLOT_IDLE). It should be noted that the above synchronization process is performed by the full synchronization thread alone, without yet considering client requests arriving at the main thread; that is, the flag bit of the slot to be synchronized defaults to HASH_SLOT_IDLE, so the CAS command executes successfully and returns true.
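The flow just described might be sketched as follows, building on the dual_hash_db and cas() sketches above; pack_and_push_slot() is an assumed stand-in for the data packing and pushing step, and the busy-wait loop is a simplification for the conflict case.

```c
/* Sketch of the full-synchronization thread flow described above. */
enum {
    HASH_SLOT_IDLE  = 0,   /* no thread is operating on the slot                 */
    HASH_SLOT_SYNC  = 1,   /* the slot is being packed for full sync (read-only) */
    HASH_SLOT_WRITE = 2,   /* the slot is being written by the main thread       */
    HASH_SLOT_MERGE = 3,   /* the slot is being merged back into the main table  */
};

extern void pack_and_push_slot(dict *slot, int index);    /* assumed helper */

void full_sync_thread_run(dual_hash_db *db) {
    atomic_store(&db->full_sync_flag, 1);                  /* FULL_SYNC_FLAG = TRUE */
    for (int n = 0; n < NUM_SLOTS; n++) {
        /* CAS(SLOTn, HASH_SLOT_IDLE, HASH_SLOT_SYNC); in the conflict-free case
         * described above the slot is idle and this succeeds immediately */
        while (!cas(&db->slot_flag[n], HASH_SLOT_IDLE, HASH_SLOT_SYNC))
            ;                                              /* otherwise wait for the slot */
        pack_and_push_slot(db->main_slots[n], n);          /* serialize and push to the slave */
        cas(&db->slot_flag[n], HASH_SLOT_SYNC, HASH_SLOT_IDLE);   /* restore the flag */
    }
    atomic_store(&db->full_sync_flag, 0);                  /* FULL_SYNC_FLAG = FALSE */
}
```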
The full synchronization process of the cached data when a client initiates a data operation request will be described in sequence with reference to fig. 5 to 7. When the main thread receives a client request, the following cases are distinguished according to the client operation type and the relative timing of the client request and the full synchronization thread.
Fig. 5 schematically illustrates a flowchart of a method for performing data read/write operations according to the flag bit of the target slot, the full-size synchronization flag bit, and the client operation type according to another embodiment of the present disclosure. As shown in fig. 5, operation S230 includes operations S231 to S232.
When it is determined that the client operation type is a read operation, a result is returned according to the data of the target slot in the cache hash table in operation S231.
Fig. 6 schematically illustrates a flowchart of a data synchronization method when a client operation type is a read operation according to another embodiment of the present disclosure. As shown in fig. 6, operation S231 includes operation S310 and operation S320.
In operation S310, if it is determined that the data of the target slot in the cache hash table is empty, the data of the target slot in the main hash table is read.
In operation S320, if it is determined that the data of the target slot in the cache hash table is not empty, the data of the target slot in the cache hash table is read.
In one example, when a client read-only command is received, the main thread and the full synchronization thread do not conflict and do not affect each other even when they operate on the same slot. The target slot corresponding to the current key is located, and whether the target slot in the cache hash table holds data is checked; if it does, a previous write operation conflicted with the full synchronization thread, and the result is read from the cache hash table and returned; if the cache hash table is empty, the result is read from the main hash table and returned.
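A possible sketch of this read path, reusing the earlier sketches, is shown below; key_to_slot(), dict_size() and dict_find() are assumed helpers, and a real implementation would likely also fall back to the main table when the key is absent from the cache slot.

```c
/* Sketch of the read path described above. */
extern int   key_to_slot(const char *key);          /* assumed key -> slot mapping */
extern long  dict_size(dict *d);                    /* assumed helper */
extern void *dict_find(dict *d, const char *key);   /* assumed helper */

void *read_key(dual_hash_db *db, const char *key) {
    int n = key_to_slot(key);
    if (db->cache_slots[n] && dict_size(db->cache_slots[n]) > 0)
        /* an earlier write conflicted with the sync thread and was diverted here */
        return dict_find(db->cache_slots[n], key);
    return dict_find(db->main_slots[n], key);        /* normal case: main hash table */
}
```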
In operation S232, when it is determined that the client operation type is a write operation and the full synchronization flag bit is full synchronization, performing a data write operation according to the flag bit of the target slot.
In one example, if the client issues a write operation, the operation type of the target slot, that is, its current state, needs to be determined from the flag bit of the target slot. When the client operation type is determined to be a write operation and the full synchronization flag bit indicates that full synchronization is in progress, the data write operation is performed according to the flag bit of the target slot. See, in particular, operations S410 to S440 shown in fig. 7.
Fig. 7 schematically illustrates a flowchart of a data synchronization method when a client operation type is a write operation according to another embodiment of the present disclosure.
As shown in fig. 7, operation S232 includes operations S410 to S440.
In operation S410, if it is determined that the target slot is performing the full synchronization operation according to the flag bit of the target slot, the flag bit of the target slot is updated.
In operation S420, when it is determined that the update of the flag bit of the target slot is successful, the client data is written into the target slot of the master hash table.
In operation S430, when it is determined that updating the flag bit of the target slot fails, a blocking waiting time is determined.
In operation S440, when the blocking waiting time is greater than the preset threshold, the client data is written into the target slot of the cache hash table.
In one example, when it is determined that the full synchronization flag bit is TRUE, the main thread executes the command CAS(SLOTn, HASH_SLOT_IDLE, HASH_SLOT_WRITE). If the main thread and the full synchronization thread operate on different slots, i.e. the current flag bit of the target slot is HASH_SLOT_IDLE, no thread is operating on the target slot; because the two threads operate on different slots, the CAS command is guaranteed to return true. The flag bit of the target slot is then updated to HASH_SLOT_WRITE (write operation), the client write command can be executed in parallel without mutual interference, and the flag bit of the target slot is restored after the command finishes.
In one example, when it is determined that the full synchronization flag bit is TRUE, the main thread executes the command CAS(SLOTn, HASH_SLOT_IDLE, HASH_SLOT_WRITE). If the main thread and the full synchronization thread operate on the same slot and the full synchronization thread got there first, the flag bit of the target slot is HASH_SLOT_SYNC, so the CAS command fails and returns false. To give priority to the timeliness of the client response, the embodiment of the disclosure records the blocking waiting time of each write operation: when the main thread receives the client write request and executes the CAS command for the first time, it records the time T1, and when false is returned, the current time T2 determines the blocking waiting time. If the blocking waiting time is smaller than a preset threshold, i.e. T2 - T1 < TIME_CLIENT_BLOCK, the CAS command is executed in a loop to keep trying to update the flag bit of the target slot; if the blocking waiting time reaches the threshold, i.e. T2 - T1 >= TIME_CLIENT_BLOCK, the timeliness of the client response takes priority, retries stop, the data is temporarily stored in the cache hash table, and the processing result is returned to the client.
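The write path just described might look like the following sketch, reusing the earlier sketches; dict_set() and the millisecond value of the threshold are assumptions, since the patent only refers to a preset threshold TIME_CLIENT_BLOCK.

```c
/* Sketch of the write path described above. */
#include <time.h>

#define TIME_CLIENT_BLOCK_MS 2                     /* assumed value of the threshold */

extern void dict_set(dict *d, const char *key, void *val);   /* assumed helper */

static long long now_ms(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000LL + ts.tv_nsec / 1000000LL;
}

void write_key(dual_hash_db *db, const char *key, void *val) {
    int n = key_to_slot(key);
    if (!atomic_load(&db->full_sync_flag)) {        /* FULL_SYNC_FLAG == FALSE: normal write */
        dict_set(db->main_slots[n], key, val);
        return;
    }
    long long t1 = now_ms();                        /* T1: time of the first CAS attempt */
    while (!cas(&db->slot_flag[n], HASH_SLOT_IDLE, HASH_SLOT_WRITE)) {
        if (now_ms() - t1 >= TIME_CLIENT_BLOCK_MS) {    /* T2 - T1 >= TIME_CLIENT_BLOCK */
            dict_set(db->cache_slots[n], key, val);     /* divert to the cache hash table */
            return;                                     /* respond to the client promptly */
        }
    }
    dict_set(db->main_slots[n], key, val);          /* slot claimed: write to the main table */
    cas(&db->slot_flag[n], HASH_SLOT_WRITE, HASH_SLOT_IDLE);    /* restore the slot flag */
}
```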
After each time the main thread executes a client command, in order to ensure that the data in the main hash table is the latest and complete data, the data merging operation of the cache hash table and the main hash table is performed.
Fig. 8a schematically illustrates one of the flowcharts of a data merging method provided according to yet another embodiment of the present disclosure, and fig. 8b schematically illustrates the second of the flowcharts of a data merging method provided according to yet another embodiment of the present disclosure. As shown in fig. 8a, including operation S510,
in operation S510, data in the cache hash table is merged into the master hash table.
As shown in fig. 8b, operation 510 includes operations S511 to S512.
In operation S511, when it is determined that there is data in any slot in the cache hash table, the flag bit of any slot is changed.
In operation S512, the data in the cache hash table is moved to the master hash table.
According to an embodiment of the present disclosure, the time stamps of the same slot data in the cache hash table and the master hash table are compared. If the time stamp of the same slot data of the cache hash table is later than the time stamp of the same slot data in the main hash table, the data of the slot of the cache hash table are synchronized to the same slot of the main hash table.
In one example, after the main thread executes each client command, it checks in the beforeSleep() function of the main loop whether the cache hash table slot SLOTn holds data. If it does, the command CAS(SLOTn, HASH_SLOT_IDLE, HASH_SLOT_MERGE) is executed to try to update the SLOTn flag bit. If the update succeeds, the CAS returns true, the current SLOTn data is moved to the main hash table, keeping the data complete and consistent, and the flag bit is then restored with CAS(SLOTn, HASH_SLOT_MERGE, HASH_SLOT_IDLE). If the update fails, the CAS returns false, the current slot index is incremented by 1, and the operation continues in the next cycle.
In one example, in the process of merging data, merging may be performed according to a timestamp of the same slot data, and specifically, the data with the timestamp closest to the current time, that is, the latest data, is reserved.
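A possible sketch of this merge step, reusing the earlier sketches, is shown below; dict_merge_newer() is an assumed helper that moves entries from the cache slot into the main slot and keeps the value with the newer timestamp when a key exists in both tables.

```c
/* Sketch of the merge step described above, run after each client command
 * in the main loop's beforeSleep(). */
extern void dict_merge_newer(dict *dst_main, dict *src_cache);   /* assumed helper */

void merge_cache_into_main(dual_hash_db *db) {
    for (int n = 0; n < NUM_SLOTS; n++) {
        if (!db->cache_slots[n] || dict_size(db->cache_slots[n]) == 0)
            continue;                                    /* nothing buffered for this slot */
        /* CAS(SLOTn, HASH_SLOT_IDLE, HASH_SLOT_MERGE) */
        if (!cas(&db->slot_flag[n], HASH_SLOT_IDLE, HASH_SLOT_MERGE))
            continue;                                    /* busy: try again in the next cycle */
        dict_merge_newer(db->main_slots[n], db->cache_slots[n]);
        cas(&db->slot_flag[n], HASH_SLOT_MERGE, HASH_SLOT_IDLE);   /* restore the flag */
    }
}
```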
The cache data full synchronization method provided by the embodiment of the disclosure uses a CAS mechanism to guarantee, without blocking, data consistency between the main thread and the full synchronization thread, and merges the cache hash table into the main hash table in a timely manner, ensuring the integrity and accuracy of the data in the main hash table.
Based on the cache data total synchronization method, the disclosure also provides a cache data total synchronization device. The device will be described in detail below in connection with fig. 9.
Fig. 9 schematically illustrates a block diagram of a buffered data full synchronization apparatus according to an embodiment of the present disclosure. As shown in fig. 9, the buffered data full synchronization device 800 of this embodiment includes a data synchronization module 810, a determination module 820, and a data manipulation module 830.
The data synchronization module 810 is configured to start a full synchronization thread and set a full synchronization flag for data synchronization in response to a master-slave node full synchronization instruction, where the full synchronization flag is used to characterize a current full synchronization state of a master node. In an embodiment, the data synchronization module 810 may be configured to perform the operation S210 described above, which is not described herein.
The determining module 820 is configured to determine a target slot and a client operation type in response to an operation instruction of the client. In an embodiment, the determining module 820 may be configured to perform the operation S220 described above, which is not described herein.
The data operation module 830 is configured to perform data read-write operation according to the flag bit of the target slot, the full synchronization flag bit, and the client operation type, where the flag bit of the slot is used to characterize a current operation type of the slot. In an embodiment, the data operation module 830 may be configured to perform the operation S230 described above, which is not described herein.
According to an embodiment of the present disclosure, the data operation module includes: a first determination sub-module and a second determination sub-module.
And the first determining submodule is used for returning a result according to the data of the target slot in the cache hash table when the client operation type is determined to be a read operation. In an embodiment, the first determining sub-module may be used to perform the operation S231 described above, which is not described herein.
And the second determining submodule is used for performing data writing operation according to the flag bit of the target slot when the client operation type is determined to be writing operation and the full synchronization flag bit is determined to be full synchronization. In an embodiment, the second determining sub-module may be used to perform the operation S232 described above, which is not described herein.
According to an embodiment of the present disclosure, the second determination submodule includes: an updating unit, a first determining unit, a second determining unit and a writing unit,
and the updating unit is used for updating the zone bit of the target slot if the target slot is determined to be subjected to full synchronization operation according to the zone bit of the target slot. In an embodiment, the updating unit may be configured to perform the operation S410 described above, which is not described herein.
And the first determining unit is used for writing the client data into the target slot of the main hash table when the flag bit of the target slot is successfully updated. In an embodiment, the first determining unit may be configured to perform the operation S420 described above, which is not described herein.
And the second determining unit is used for determining blocking waiting time when determining that the updating of the flag bit of the target slot fails. In an embodiment, the second determining unit may be configured to perform the operation S430 described above, which is not described herein.
And the writing unit is used for writing the client data into the target slot in the cache hash table when the blocking waiting time is larger than a preset threshold value. In an embodiment, the writing unit may be used to perform the operation S440 described above, which is not described herein.
According to an embodiment of the present disclosure, the first determination submodule includes: a third determination unit and a fourth determination unit.
And the third determining unit is used for reading the data of the target slot in the main hash table if the data of the target slot in the cache hash table is determined to be empty. In an embodiment, the third determining unit may be configured to perform the operation S310 described above, which is not described herein.
And a fourth determining unit, configured to read the data of the target slot in the cache hash table if it is determined that the data of the target slot in the cache hash table is not empty. In an embodiment, the fourth determining unit may be configured to perform the operation S320 described above, which is not described herein.
According to an embodiment of the present disclosure, further comprising: and a data merging module.
And the data merging module is used for merging the data in the cache hash table into the main hash table. In an embodiment, the data merging module may be configured to perform the operation S510 described above, which is not described herein.
According to an embodiment of the present disclosure, the data merging module includes: a first update sub-module and a data merge sub-module,
the first updating sub-module is used for changing the zone bit of any slot when determining that data exists in any slot in the cache hash table; in an embodiment, the first update sub-module may be used to perform the operation S511 described above, which is not described herein.
And the data merging sub-module is used for moving the data in the cache hash table to the main hash table. In an embodiment, the data merging sub-module may be used to perform the operation S512 described above, which is not described herein.
According to an embodiment of the present disclosure, the data synchronization module 810 includes: a second updating sub-module, a data packing sub-module and a third updating sub-module.
And the second updating sub-module is used for updating the full synchronization flag bit and the flag bit of the slot to be synchronized. In an embodiment, the second update sub-module may be used to perform the operation S211 described above, which is not described herein.
And the data packing sub-module is used for executing data packing operation on the slots to be synchronized. In an embodiment, the data packing sub-module may be used to perform the operation S212 described above, which is not described herein.
And the third updating sub-module is used for recovering the flag bit of the slot to be synchronized after the execution is completed. In an embodiment, the third update sub-module may be used to perform the operation S213 described above, which is not described herein.
Any of the data synchronization module 810, the determination module 820, and the data manipulation module 830 may be combined in one module to be implemented, or any of the modules may be split into a plurality of modules, according to embodiments of the present disclosure. Alternatively, at least some of the functionality of one or more of the modules may be combined with at least some of the functionality of other modules and implemented in one module. According to embodiments of the present disclosure, at least one of the data synchronization module 810, the determination module 820, and the data manipulation module 830 may be implemented at least in part as hardware circuitry, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-a-substrate, a system-on-package, an Application Specific Integrated Circuit (ASIC), or in hardware or firmware, such as any other reasonable manner of integrating or packaging the circuitry, or in any one of or a suitable combination of any of the three. Alternatively, at least one of the data synchronization module 810, the determination module 820, and the data manipulation module 830 may be at least partially implemented as a computer program module that, when executed, performs the corresponding functions.
Fig. 10 schematically illustrates a block diagram of an electronic device adapted to implement a buffered data full synchronization method according to an embodiment of the present disclosure.
As shown in fig. 10, an electronic device 900 according to an embodiment of the present disclosure includes a processor 901 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 902 or a program loaded from a storage portion 908 into a Random Access Memory (RAM) 903. The processor 901 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. Processor 901 may also include on-board memory for caching purposes. Processor 901 may include a single processing unit or multiple processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
In the RAM 903, various programs and data necessary for the operation of the electronic device 900 are stored. The processor 901, the ROM 902, and the RAM 903 are connected to each other by a bus 904. The processor 901 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 902 and/or the RAM 903. Note that the program may be stored in one or more memories other than the ROM 902 and the RAM 903. The processor 901 may also perform various operations of the method flow according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the disclosure, the electronic device 900 may also include an input/output (I/O) interface 905, which is also connected to the bus 904. The electronic device 900 may also include one or more of the following components connected to the I/O interface 905: an input portion 906 including a keyboard, a mouse, and the like; an output portion 907 including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD), and a speaker; a storage portion 908 including a hard disk or the like; and a communication section 909 including a network interface card such as a LAN card, a modem, or the like. The communication section 909 performs communication processing via a network such as the Internet. The drive 910 is also connected to the I/O interface 905 as needed. A removable medium 911, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 910 as needed, so that a computer program read from it is installed into the storage portion 908 as needed.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs that, when executed, implement a cache data full synchronization method according to an embodiment of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, the computer-readable storage medium may include ROM 902 and/or RAM 903 and/or one or more memories other than ROM 902 and RAM 903 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the methods shown in the flowcharts. When the computer program product runs on a computer system, the program code causes the computer system to implement the cache data full synchronization method provided by the embodiments of the present disclosure.
The above-described functions defined in the system/apparatus of the embodiments of the present disclosure are performed when the computer program is executed by the processor 901. The systems, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
In one embodiment, the computer program may be carried on a tangible storage medium such as an optical storage device or a magnetic storage device. In another embodiment, the computer program may also be transmitted and distributed over a network medium in the form of a signal, downloaded and installed via the communication portion 909, and/or installed from the removable medium 911. The computer program may include program code that may be transmitted using any appropriate network medium, including but not limited to wireless or wired media, or any suitable combination of the foregoing.
According to embodiments of the present disclosure, the program code for carrying out the computer programs provided by the embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, these computer programs may be implemented in high-level procedural and/or object-oriented programming languages, and/or in assembly/machine languages. Programming languages include, but are not limited to, Java, C++, Python, the "C" language, and similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. Where a remote computing device is involved, the remote computing device may be connected to the user's computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (for example, via the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the present disclosure and/or in the claims may be combined in a variety of ways, even if such combinations are not explicitly recited in the present disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or in the claims may be combined without departing from the spirit and teachings of the present disclosure. All such combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure are described above. However, these embodiments are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described above separately, this does not mean that the measures in the respective embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the disclosure, and such alternatives and modifications are intended to fall within the scope of the disclosure.

Claims (11)

1. A cache data full synchronization method, applied to a cache database master node, wherein the master node stores data using a double-hash architecture, the double-hash architecture comprises a master hash table and a cache hash table, and the cache hash table is used for storing temporary incremental data, the method comprising:
in response to a master-slave node full synchronization instruction, starting a full synchronization thread and setting a full synchronization flag bit for data synchronization, wherein the full synchronization flag bit is used for representing a current full synchronization state of the master node;
in response to an operation instruction of a client, determining a target slot and a client operation type; and
performing a data read-write operation according to a flag bit of the target slot, the full synchronization flag bit, and the client operation type, wherein the flag bit of a slot is used for representing a current operation type of the slot.
2. The method of claim 1, wherein the performing a data read-write operation according to the flag bit of the target slot, the full synchronization flag bit, and the client operation type comprises:
when the client operation type is determined to be a read operation, returning a result according to the data of the target slot in the cache hash table; and
when the client operation type is determined to be a write operation and the full synchronization flag bit indicates that full synchronization is in progress, performing a data write operation according to the flag bit of the target slot.
3. The method of claim 2, wherein the performing a data write operation according to the flag bit of the target slot comprises:
if it is determined, according to the flag bit of the target slot, that the target slot is performing a full synchronization operation, updating the flag bit of the target slot;
when the flag bit of the target slot is successfully updated, writing the client data into the target slot of the master hash table;
when it is determined that updating the flag bit of the target slot has failed, determining a blocking wait time; and
when the blocking wait time is greater than a preset threshold, writing the client data into the target slot of the cache hash table.
4. The method of claim 2, wherein the returning a result according to the data of the target slot in the cache hash table comprises:
if the data of the target slot in the cache Ha Xibiao table is determined to be empty, the data of the target slot in the main hash table is read; and
if the data of the target slot in the cache hash table is determined not to be empty, reading the data of the target slot in the cache hash table.
5. The method according to any one of claims 1 to 4, further comprising, after performing the data read-write operation according to the flag bit of the target slot, the full synchronization flag bit, and the client operation type:
merging the data in the cache hash table into the master hash table.
6. The method of claim 5, wherein merging the data in the cache hash table into the master hash table comprises:
when it is determined that data exists in any slot in the cache hash table, changing the flag bit of the slot; and
and moving the data in the cache hash table to a main hash table.
7. The method of claim 6, wherein the starting a full synchronization thread and setting a full synchronization flag bit for data synchronization comprises:
updating the full synchronization flag bit and the flag bit of a slot to be synchronized;
performing a data packing operation on the slot to be synchronized; and
after the packing operation is completed, restoring the flag bit of the slot to be synchronized.
8. A cache data full synchronization device, applied to a cache database master node, wherein the master node stores data using a double-hash architecture, the double-hash architecture comprises a master hash table and a cache hash table, and the cache hash table is used for storing temporary incremental data, the device comprising:
a data synchronization module, used for responding to a master-slave node full synchronization instruction, starting a full synchronization thread and setting a full synchronization flag bit for data synchronization, wherein the full synchronization flag bit is used for representing the current full synchronization state of the master node;
a determining module, used for determining a target slot and a client operation type in response to an operation instruction of a client; and
a data operation module, used for performing a data read-write operation according to the flag bit of the target slot, the full synchronization flag bit, and the client operation type, wherein the flag bit of a slot is used for representing a current operation type of the slot.
9. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the cache data full synchronization method of any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the cache data full synchronization method of any one of claims 1 to 7.
11. A computer program product comprising a computer program which, when executed by a processor, implements the cache data full synchronization method according to any one of claims 1 to 7.
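Claims 2 to 4 describe how a client operation is dispatched between the two hash tables while a full synchronization is running: reads consult the cache hash table before the master hash table, and writes that cannot promptly update the flag bit of a slot being packed are diverted into the cache hash table. The following is a minimal single-process sketch of that dispatch, not code from the patent: the names DoubleHashNode, SLOT_COUNT, and BLOCK_WAIT_THRESHOLD are assumptions, and a threading lock stands in for the flag-bit update described in claim 3.

```python
# Minimal sketch only (not code from the patent): all names such as
# DoubleHashNode, SLOT_COUNT and BLOCK_WAIT_THRESHOLD are assumptions,
# and a threading.Lock stands in for the per-slot flag-bit update.
import threading
import time

SLOT_COUNT = 16               # assumed number of slots for the example
BLOCK_WAIT_THRESHOLD = 0.05   # assumed blocking-wait limit, in seconds

class DoubleHashNode:
    """Single-process model of a master node with a double-hash layout."""

    def __init__(self):
        self.master = [dict() for _ in range(SLOT_COUNT)]  # master hash table, one dict per slot
        self.cache = [dict() for _ in range(SLOT_COUNT)]   # cache hash table for temporary incremental data
        self.full_sync_in_progress = False                 # full synchronization flag bit
        self.slot_syncing = [False] * SLOT_COUNT           # per-slot flag bit: True while the slot is being packed
        self._lock = threading.Lock()

    def _slot_of(self, key):
        return hash(key) % SLOT_COUNT

    def read(self, key):
        # Read path: prefer the cache hash table, fall back to the master hash table.
        slot = self._slot_of(key)
        if key in self.cache[slot]:
            return self.cache[slot][key]
        return self.master[slot].get(key)

    def write(self, key, value):
        slot = self._slot_of(key)
        if not self.full_sync_in_progress:
            self.master[slot][key] = value                 # no full sync running: write straight to the master table
            return "master"
        # Full sync is running: try to claim the slot within the blocking-wait threshold.
        deadline = time.monotonic() + BLOCK_WAIT_THRESHOLD
        while time.monotonic() < deadline:
            with self._lock:
                if not self.slot_syncing[slot]:            # slot flag is free: write to the master table
                    self.master[slot][key] = value
                    return "master"
            time.sleep(0.001)                              # brief blocking wait before retrying
        self.cache[slot][key] = value                      # wait exceeded the threshold: divert to the cache table
        return "cache"
```

Keeping the diverted writes in per-slot dictionaries mirrors the slot-granular flag bits of the claims, so only the slot currently being packed pays any blocking cost.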
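Claims 5 to 7 cover the full synchronization thread itself (update the flag bits, pack each slot, restore the slot flag) and the subsequent merge of the cache hash table back into the master hash table. The continuation below sketches that flow under the same assumptions, reusing DoubleHashNode and SLOT_COUNT from the previous sketch; pickle and the send callback are arbitrary stand-ins, not the packing format or transport used by the patent.

```python
# Continuation of the sketch above (reuses DoubleHashNode, SLOT_COUNT and the
# threading import); pickle is an arbitrary stand-in for the real packing
# format, and send() is a hypothetical callback that ships a packed slot out.
import pickle

def full_sync(node, send):
    """Pack every slot of the master table, then fold diverted writes back in."""
    node.full_sync_in_progress = True                  # set the full synchronization flag bit
    try:
        for slot in range(SLOT_COUNT):
            with node._lock:
                node.slot_syncing[slot] = True          # mark the slot to be synchronized
            payload = pickle.dumps(node.master[slot])   # pack the slot's data for the slave node
            send(slot, payload)
            with node._lock:
                node.slot_syncing[slot] = False         # restore the slot flag after packing
    finally:
        node.full_sync_in_progress = False
    merge_cache_into_master(node)                       # merge the cache hash table back afterwards

def merge_cache_into_master(node):
    for slot in range(SLOT_COUNT):
        if node.cache[slot]:                            # data exists in this slot of the cache table
            with node._lock:
                node.slot_syncing[slot] = True          # change the slot flag while moving data
                node.master[slot].update(node.cache[slot])
                node.cache[slot].clear()
                node.slot_syncing[slot] = False

if __name__ == "__main__":
    node = DoubleHashNode()
    node.write("user:1", "alice")
    t = threading.Thread(target=full_sync, args=(node, lambda s, p: None))
    t.start()
    node.write("user:2", "bob")                         # may land in the cache table while the sync runs
    t.join()
    print(node.read("user:1"), node.read("user:2"))     # both keys remain readable throughout
```

Running the example at the bottom prints "alice bob": both keys stay readable whether they currently live in the master hash table or in the cache hash table.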
CN202310720377.3A 2023-06-16 2023-06-16 Cache data full synchronization method and device Pending CN116737724A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310720377.3A CN116737724A (en) 2023-06-16 2023-06-16 Cache data full synchronization method and device

Publications (1)

Publication Number Publication Date
CN116737724A true CN116737724A (en) 2023-09-12

Family

ID=87904142

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310720377.3A Pending CN116737724A (en) 2023-06-16 2023-06-16 Cache data full synchronization method and device

Country Status (1)

Country Link
CN (1) CN116737724A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination