WO2022258163A1 - Cascaded data mover for cascaded backup system and method of cascaded backup
- Publication number
- WO2022258163A1 (PCT/EP2021/065399)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- snapshot
- cascaded
- data
- bitmap
- backup data
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2094—Redundant storage or storage space
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1446—Point-in-time backing up or restoration of persistent data
- G06F11/1448—Management of the data involved in backup or backup restore
- G06F11/1451—Management of the data involved in backup or backup restore by selection of backup contents
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/202—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
- G06F11/2041—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant with more than one idle spare processing component
Definitions
- the present disclosure relates generally to the field of data protection and backup; and more specifically, to a cascaded data mover for a cascaded backup system and a method of cascaded backup.
- data is backed up from a primary storage to a secondary storage.
- the backed up data may then be transferred to a recovery site, such as a tertiary storage, to protect and recover data in an event of data loss in the secondary storage.
- Examples of the event of data loss may include, but are not limited to, data corruption, hardware or software failure, accidental deletion of data, hacking, or malicious attack.
- a separate recovery site is generally extensively used to store a copy of the data present in the secondary storage.
- Such primary, secondary, and tertiary storages may collectively be referred to as a cascaded CDP system.
- all the data in the primary storage, being a protected item (e.g., a virtual machine), is backed up to the secondary storage whenever any change is made in the data, and the data from the secondary storage is replicated to the tertiary storage.
- the blocks written on the primary storage are backed up to the secondary storage, and so on, continuously.
- the primary, secondary, and tertiary storages usually need to be synchronized in the shortest time possible, because a long synchronization time risks data loss.
- the backup data is generally provided by the primary storage to the secondary storage quickly. Thus, there may be many backup points created in the secondary storage.
- a first transfer of a snapshot of backup data to the tertiary storage can take a much longer time than a transfer from the primary storage to the secondary storage due to bandwidth limitations.
- copying the first snapshot to the tertiary storage can then be started.
- a new snapshot may be available on the secondary storage.
- the present disclosure provides a cascaded data mover for a cascaded backup system and a method of cascaded backup.
- the present disclosure provides a solution to the existing problem of a risk of data loss and a long wait time period for a tertiary storage when a snapshot is being copied from a secondary storage to the tertiary storage and another (new) snapshot is available at the secondary storage for copying to the tertiary storage. This long waiting time period increases the risk of data loss.
- An aim of the present disclosure is to provide a solution that overcomes at least partially the problems encountered in prior art, and provide an improved cascaded backup system and method of cascaded backup which enables switching (i.e., jumping) to a new snapshot while copying a current snapshot from the secondary storage to the tertiary storage only when a defined threshold percentage of data has been copied from the current snapshot.
- the present disclosure provides a cascaded data mover for a cascaded backup system, the cascaded backup system comprising a primary storage with a primary volume for storing data of a computing system, a secondary storage for storing a backup data of the primary volume obtained from the primary storage, a tertiary storage for storing a cascaded backup data of the primary volume obtained from the secondary storage, and a data mover for replicating the primary volume to the backup data on the secondary storage, the cascaded data mover comprising: a replication unit configured to replicate the backup data on the secondary storage to the cascaded backup data on the tertiary storage, by means of: creating a bitmap of the blocks changed between a last snapshot of the primary volume copied to the cascaded backup data and a current snapshot of the primary volume available in the backup data, and copying blocks from the current snapshot to the cascaded backup data according to the bitmap, and a control unit configured to determine when a newer snapshot of the primary volume is available in the backup data and instruct the replication unit to jump to the newer snapshot when a certain percent of the current snapshot is copied to the cascaded backup data, by means of: creating a difference bitmap of the difference between the current snapshot and the newer snapshot, merging the difference bitmap with the bitmap, and copying blocks from the newer snapshot according to the merged bitmap.
- the system of the present disclosure provides improved data storage from the secondary storage to the tertiary storage when the current snapshot is copied to the tertiary storage.
- the system ensures that the cascaded data mover does not move to a newer snapshot before completely copying a certain amount of data from the current snapshot.
- the system of the present disclosure is not at a risk of never finishing a process of copying data to the tertiary storage.
- the cascaded data mover of the system of the present disclosure reaches a consistent state from which restoration can be done when needed.
- a risk of data loss in conventional approaches is significantly reduced in the system of the present disclosure.
- the certain percent of the current snapshot is defined by a replication threshold.
- the replication threshold ensures that in comparison to the conventional approach, there is no risk of not finishing the process of copying the data from the secondary storage to the tertiary storage.
- the replication threshold is increased each time the cascaded data mover jumps to a new snapshot until the replication threshold reaches 100 percent.
- the replication threshold is increased until 100 percent is reached to enable the tertiary storage to achieve consistency with the secondary storage.
- the replication threshold is decreased to an initial value after 100 percent of the current snapshot of the primary volume is copied to the cascaded backup data.
- the replication threshold is decreased after 100 percent is reached to enable copying of data from given newer snapshots after a certain percentage of a given current snapshot is copied.
- the tertiary storage has improved consistency with the secondary storage.
- the replication unit is configured to create a bitmap of the size of the current snapshot of the primary volume available in the backup data to replicate the backup data on the secondary storage to the cascaded backup data on the tertiary storage.
- the bitmap enables the current snapshot to be copied to the tertiary storage to enable synchronization of data between the primary storage, the secondary storage and the tertiary storage.
- the present disclosure provides a method of cascaded backup, comprising: storing data of a computing system in a primary volume on a primary storage, replicating, by a data mover, the primary volume to a backup data on a secondary storage, replicating, by a cascaded data mover, the backup data on the secondary storage to a cascaded backup data on a tertiary storage, by means of: creating a bitmap of the blocks changed between a last snapshot of the primary volume copied to the cascaded backup data and a current snapshot of the primary volume available in the backup data, and copying blocks from the current snapshot to the cascaded backup data according to the bitmap, and jumping, by the cascaded data mover, to a newer snapshot of the primary volume when the newer snapshot is available in the backup data and a certain percent of the current snapshot is copied to the cascaded backup data, by means of: creating a difference bitmap of the difference between the current snapshot and the newer snapshot, merging the difference bitmap with the bitmap, and copying blocks from the newer snapshot according to the merged bitmap.
- the method achieves all the advantages and effects of the cascaded data mover of the present disclosure.
- FIG. 1A is a block diagram of a cascaded backup system, in accordance with an embodiment of the present disclosure;
- FIG. 1B is a block diagram that illustrates various exemplary components of a primary storage of the cascaded backup system, in accordance with an embodiment of the present disclosure;
- FIG. 1C is a block diagram that illustrates various exemplary components of a secondary storage of the cascaded backup system, in accordance with an embodiment of the present disclosure;
- FIG. 1D is a block diagram that illustrates various exemplary components of a tertiary storage of the cascaded backup system, in accordance with an embodiment of the present disclosure;
- FIG. 2 is a flowchart of a method of cascaded backup, in accordance with an embodiment of the present disclosure;
- FIG. 3 is a block diagram of a cascaded backup system that depicts data transfer from a primary storage to a secondary storage and a tertiary storage, in accordance with an embodiment of the present disclosure; and
- FIG. 4 is a timing diagram of a cascaded backup system that depicts data transfer from a primary storage to a secondary storage and a tertiary storage, in accordance with an embodiment of the present disclosure.
- an underlined number is employed to represent an item over which the underlined number is positioned or an item to which the underlined number is adjacent.
- a non-underlined number relates to an item identified by a line linking the non-underlined number to the item. When a number is non-underlined and accompanied by an associated arrow, the non-underlined number is used to identify a general item at which the arrow is pointing.
- FIG. 1A is a block diagram of a cascaded backup system, in accordance with an embodiment of the present disclosure.
- There is shown a block diagram 100A of a cascaded backup system 102 that includes a primary storage 104, a secondary storage 106, and a tertiary storage 108.
- There is shown a primary volume 110, a backup data 112, a cascaded backup data 114, a data mover 116, a cascaded data mover 118, a replication unit 120, a control unit 122, a bitmap 124, a difference bitmap 126, a merged bitmap 128, a last snapshot 130, a current snapshot 132 and a newer snapshot 134.
- There is further shown a first communication network 136 and a second communication network 138.
- the units, such as the replication unit 120 and the control unit 122 may also be referred to as a replication circuit and a control circuit, respectively.
- the present disclosure provides a cascaded data mover 118 for a cascaded backup system 102, the cascaded backup system 102 comprising a primary storage 104 with a primary volume 110 for storing data of a computing system, a secondary storage 106 for storing a backup data 112 of the primary volume 110 obtained from the primary storage 104, a tertiary storage 108 for storing a cascaded backup data 114 of the primary volume 110 obtained from the secondary storage 106, and a data mover 116 for replicating the primary volume 110 to the backup data 112 on the secondary storage 106, the cascaded data mover 118 comprising: a replication unit 120 configured to replicate the backup data 112 on the secondary storage 106 to the cascaded backup data 114 on the tertiary storage 108, by means of: creating a bitmap 124 of the blocks changed between a last snapshot 130 of the primary volume 110 copied to the cascaded backup data 114 and a current snapshot 132 of the primary volume 110 available in the backup data 112, and copying blocks from the current snapshot 132 to the cascaded backup data 114 according to the bitmap 124, and a control unit 122 configured to determine when a newer snapshot 134 of the primary volume 110 is available in the backup data 112 and instruct the replication unit 120 to jump to the newer snapshot 134 when a certain percent of the current snapshot 132 is copied to the cascaded backup data 114, by means of: creating a difference bitmap 126 of the difference between the current snapshot 132 and the newer snapshot 134, merging the difference bitmap 126 with the bitmap 124, and copying blocks from the newer snapshot 134 according to the merged bitmap 128.
- the primary storage 104 comprises the primary volume 110 for storing data of the computing system.
- the primary storage 104 includes suitable logic, circuitry, and interfaces that may be configured to store data from the computing systems such as virtual machines or physical machines. In an example, the primary storage 104 may be part of the computing system.
- the data received by the primary storage 104 is stored in the primary volume 110, such as a storage disk. Examples of implementation of the primary storage 104 may include, but are not limited to, a production environment storage, a data storage server, or a datacentre that includes one or more hard disk drives, often arranged into logical, redundant storage containers or a Redundant Array of Inexpensive Disks (RAID).
- the secondary storage 106 is configured for storing the backup data 112 of the primary volume 110 obtained from the primary storage 104 and the bitmap 124, the difference bitmap 126, the merged bitmap 128, the last snapshot 130, the current snapshot 132 and the newer snapshot 134.
- the secondary storage 106 includes suitable logic, circuitry, and interfaces that may be configured to back-up the data stored in the primary storage 104 for recovery whenever needed.
- the secondary storage 106 stores the data received from the primary storage 104 in the backup data 112.
- the secondary storage 106 may be a Network Attached Storage (NAS). Examples of implementation of the secondary storage 106 may include, but are not limited to, block-based storages, storage arrays and the like.
- the tertiary storage 108 is configured for storing the cascaded backup data 114 of the primary volume 110 obtained from the secondary storage 106.
- the tertiary storage 108 includes suitable logic, circuitry, and interfaces that may be configured to store the back-up data of the secondary storage 106 for restore in case of data loss at the secondary storage 106.
- the tertiary storage 108 stores the data received from the secondary storage 106 in the cascaded backup data 114.
- the tertiary storage 108 may be a cloud-based storage, a storage array, or the like.
- the data mover 116 includes suitable logic, circuitry, and interfaces that may be configured to enable replicating the primary volume 110 to the backup data 112 on the secondary storage 106 for establishing a synchronization between the primary storage 104 and the secondary storage 106.
- the data mover 116 may be implemented as a software module or circuitry.
- the cascaded data mover 118 includes suitable logic, circuitry, and interfaces that may be configured to enable storing of data in the tertiary storage 108 from the secondary storage 106 for establishing a synchronization between the secondary storage 106 and the tertiary storage 108.
- the cascaded data mover 118 may be implemented as a software module or circuitry.
- the first communication network 136 includes a medium (e.g., a communication channel) through which the primary storage 104 communicates with the secondary storage 106.
- the second communication network 138 includes a medium (e.g., a communication channel) through which the secondary storage 106 communicates with the tertiary storage 108.
- the first communication network 136 and the second communication network 138 may be a wired or wireless communication network.
- Examples of the first communication network 136 and the second communication network 138 may include, but are not limited to, a Wireless Fidelity (Wi-Fi) network, a Local Area Network (LAN), a wireless personal area network (WPAN), a Wireless Local Area Network (WLAN), a wireless wide area network (WWAN), a cloud network, a Long Term Evolution (LTE) network, a Metropolitan Area Network (MAN), or the Internet.
- the replication unit 120 of the cascaded data mover 118 is configured to replicate the backup data 112 on the secondary storage 106 to the cascaded backup data 114 on the tertiary storage 108, by means of: creating a bitmap 124 of the blocks changed between a last snapshot 130 of the primary volume 110 copied to the cascaded backup data 114 and a current snapshot 132 of the primary volume 110 available in the backup data 112, and copying blocks from the current snapshot 132 to the cascaded backup data 114 according to the bitmap 124.
- the last snapshot 130 includes the data stored in the blocks of the primary volume 110, at a first time, which is already copied to the cascaded backup data 114 to enable recovery of the data whenever needed.
- the current snapshot 132 includes the data stored in the blocks of the primary volume 110, at a second time, which is available for copying to the cascaded backup data 114.
- the first time is before the second time, i.e., the last snapshot 130 is taken at a time which is earlier than a time at which the current snapshot 132 is taken.
- the replication unit 120 is configured to create the bitmap 124 which provides information about the blocks which have changed or updated data from the time at which the last snapshot 130 is taken to the time at which the current snapshot 132 is taken.
- the blocks in the primary volume 110 may keep changing based on data provided by the computing system.
- the blocks having the changed or updated data in the current snapshot 132 are copied to the cascaded backup data 114.
- the tertiary storage 108 is consistent with the secondary storage 106.
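To make the replication mechanics described above concrete, the following is a minimal illustrative sketch in Python. The block-list model of a snapshot and the helper names `create_change_bitmap` and `copy_marked_blocks` are assumptions for this sketch, not part of the disclosure.

```python
# Hypothetical block-level model: a snapshot is a list of equal-sized
# blocks (e.g., bytes objects); a bitmap is a list of booleans, one per block.

def create_change_bitmap(last_snapshot, current_snapshot):
    """Turn a bit ON for every block that differs between the last
    snapshot already replicated and the current snapshot."""
    return [old != new for old, new in zip(last_snapshot, current_snapshot)]

def copy_marked_blocks(snapshot, bitmap, cascaded_backup):
    """Copy every block whose bit is ON to the cascaded backup data,
    turning the bit OFF once the block has been replicated."""
    for index, bit_on in enumerate(bitmap):
        if bit_on:
            cascaded_backup[index] = snapshot[index]
            bitmap[index] = False
```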
- the control unit 122 of the cascaded data mover 118 is configured to determine when a newer snapshot 134 of the primary volume 110 is available in the backup data 112 and instruct the replication unit 120 to jump to the newer snapshot 134 when a certain percent of the current snapshot 132 is copied to the cascaded backup data 114, by means of: creating a difference bitmap 126 of the difference between the current snapshot 132 and the newer snapshot 134, merging the difference bitmap 126 with the bitmap 124, and copying blocks from the newer snapshot 134 according to the merged bitmap 128.
- the primary storage 104 is configured to provide the newer snapshot 134 of the primary volume 110 to the secondary storage 106.
- the newer snapshot 134 includes the data stored in the blocks of the primary volume 110, at a third time.
- the third time is after the second time, i.e., the newer snapshot 134 is taken after the current snapshot 132 is taken.
- the newer snapshot 134 may be received by the secondary storage 106 when the current snapshot 132 is being copied to the cascaded backup data 114.
- the control unit 122 detects when the newer snapshot 134 is available at the secondary storage 106. This detection may be based on parameters like time, date, name and the like. Further, the control unit 122 provides instructions to the replication unit 120 to jump to the newer snapshot 134 when a certain percentage of the current snapshot 132 is copied to the cascaded backup data 114. In an example, this percentage may be 10 percent.
- the replication unit 120 is configured to create the difference bitmap 126 which provides information about the blocks which have changed or updated data from the time at which the current snapshot 132 is taken to the time at which the newer snapshot 134 is taken. Further, after creating the difference bitmap 126, the difference bitmap 126 is merged with the bitmap 124 and blocks from the newer snapshot 134 are copied to the cascaded backup data 114. Thus, the blocks in the current snapshot 132 which are left to be copied to the cascaded backup data 114 are now replaced with the blocks having changed or updated data from the newer snapshot 134.
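The jump itself can be sketched on top of the previous model (again with hypothetical names; `create_change_bitmap` is the helper from the sketch above). The key point is that the merge is a bitwise OR, so blocks of the current snapshot not yet copied stay marked and are simply read from the newer snapshot instead:

```python
def jump_to_newer_snapshot(current_snapshot, newer_snapshot, bitmap):
    """Create the difference bitmap, merge it (bitwise OR) into the
    current bitmap, and switch the copy source to the newer snapshot."""
    difference_bitmap = create_change_bitmap(current_snapshot, newer_snapshot)
    merged_bitmap = [a or b for a, b in zip(bitmap, difference_bitmap)]
    # Remaining un-copied blocks and newly changed blocks are both ON,
    # so subsequent copying reads everything from the newer snapshot.
    return newer_snapshot, merged_bitmap
```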
- the system 102, in comparison to conventional systems, does not have any risk of never finishing the process of copying the data from the secondary storage 106 to the tertiary storage 108, and thus the tertiary storage 108 reaches a consistent state from which restoration can be done.
- the certain percent of the current snapshot 132 is defined by a replication threshold.
- the replication threshold refers to a parameter that indicates an amount of data that must be copied from the current snapshot 132 before moving to read from the newer snapshot 134.
- the replication threshold ensures that in comparison to the conventional approach, there is no risk of not finishing the process of copying the data from the secondary storage 106 to the tertiary storage 108.
- the replication threshold is increased each time the cascaded data mover 118 jumps to a new snapshot until the replication threshold reaches 100 percent.
- an initial value of the replication threshold may be 10 percent and may be increased by 10 percent each time the cascaded data mover 118 jumps to the new snapshot.
- an initial value of the replication threshold may be 10 percent and may be increased by 5 percent each time.
- an initial value of the replication threshold may be 15 percent and may be increased by 5 percent each time. The replication threshold is increased until 100 percent is reached to enable the tertiary storage 108 to achieve consistency with the secondary storage 106.
- the replication threshold is decreased to an initial value after 100 percent of the current snapshot 132 of the primary volume 110 is copied to the cascaded backup data 114. In an example, the replication threshold is decreased to an initial value of 10 percent after 100 percent of the current snapshot 132 is copied to the cascaded backup data 114. At 100 percent, the tertiary storage 108 achieves consistency with the secondary storage 106. The replication threshold is decreased after 100 percent is reached to enable copying of data from given newer snapshots after a certain percentage of a given current snapshot is copied. Thus, the tertiary storage 108 has improved consistency with the secondary storage 106. According to an embodiment, the replication threshold is increased linearly or exponentially.
- the new percentage of the replication threshold after each jump can be larger than the previous one by a constant amount or by a larger difference.
- an initial value of the replication threshold may be 10 percent which may be increased linearly by 10 percent till 100 percent.
- an initial value of the replication threshold may be 10 percent which may be increased exponentially by 5 percent, 15 percent, 30 percent and 40 percent.
- the aforesaid replication threshold may be ten percent.
- the present disclosure starts by copying the current snapshot 132 and does not move to the newer snapshot 134 until at least 10 percent of the current snapshot 132 is copied, even if the newer snapshot 134 becomes available before 10 percent of the current snapshot 132 has been copied. Further, after copying at least 10 percent of the current snapshot 132, if a new snapshot exists (it could be the newer snapshot 134 or even a further newer snapshot), then the difference bitmap 126 is created from the difference between the current snapshot 132 and the newer snapshot 134 and merged with the bitmap 124 to start reading from the newer snapshot 134.
- the present disclosure then does not jump to a further snapshot until at least 20 percent of the newer snapshot 134 is copied, and so on for other newer snapshots. Thus, after ten jumps, the present disclosure does not move any more until 100 percent of a given newer snapshot is copied. As a result, even if a given current snapshot has not finished copying when a given newer snapshot is created, the present disclosure finally copies the given current snapshot to the end and creates a consistent point in the tertiary storage 108.
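One way to sketch this threshold schedule is shown below (hypothetical Python; the class name and the constant `step` are assumptions for illustration). A constant step models the linear variant; a growing step would model the exponential variant:

```python
class ReplicationThreshold:
    """Threshold that grows on every jump and resets once a full
    snapshot has been copied (i.e., a consistent checkpoint exists)."""

    def __init__(self, initial=10, step=10):
        self.initial = initial
        self.step = step
        self.value = initial  # percent of the current snapshot to copy

    def may_jump(self, percent_copied):
        # A jump to a newer snapshot is allowed only past the threshold.
        return percent_copied >= self.value

    def on_jump(self):
        # Each jump raises the bar, capped at 100 percent.
        self.value = min(self.value + self.step, 100)

    def on_snapshot_complete(self):
        # 100 percent copied: consistent checkpoint reached, reset.
        self.value = self.initial
```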
- the replication unit 120 is configured to create a bitmap of the size of the current snapshot 132 of the primary volume 110 available in the backup data 112 to replicate the backup data 112 on the secondary storage 106 to the cascaded backup data 114 on the tertiary storage 108.
- the current snapshot 132 is a first snapshot to be copied to the cascaded backup data 114.
- the bitmap of the size of the current snapshot 132 is created using the current snapshot 132.
- the current snapshot 132 is copied to the tertiary storage 108 to enable synchronization of data between the primary storage 104, the secondary storage 106 and the tertiary storage 108.
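For this bootstrap case a short hypothetical helper suffices: with no previously copied snapshot to diff against, every bit starts ON.

```python
def create_full_bitmap(snapshot):
    """First replication: no snapshot has been copied yet, so every
    block of the current snapshot must be transferred."""
    return [True] * len(snapshot)
```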
- the system 102 of the present disclosure provides improved data storage from the secondary storage 106 to the tertiary storage 108 when the current snapshot 132 is being copied to the tertiary storage 108.
- the system 102 ensures that the cascaded data mover 118 does not move to a newer snapshot 134 before completely copying a certain amount of data from the current snapshot 132.
- the system 102 of the present disclosure is not at a risk of never finishing a process of copying data to the tertiary storage 108.
- the system 102 of the present disclosure reaches a consistent state from which restoration can be done when needed.
- a risk of data loss in conventional approaches is significantly reduced in the system 102 of the present disclosure.
- FIG. 1B is a block diagram that illustrates various exemplary components of a primary storage of the cascaded backup system, in accordance with an embodiment of the present disclosure.
- a block diagram 100B of the primary storage 104 that includes a first processor 140, a first transceiver 142 and a first memory 144.
- the first memory 144 includes the primary volume 110.
- the first processor 140 is configured to receive the data from computing systems such as virtual machines or physical machines.
- the first processor 140 is configured to execute instructions stored in the first memory 144.
- the first processor 140 may be a general-purpose processor.
- Other examples of the first processor 140 may include, but are not limited to, a microprocessor, a microcontroller, a complex instruction set computing (CISC) processor, an application-specific integrated circuit (ASIC) processor, a reduced instruction set (RISC) processor, a very long instruction word (VLIW) processor, a central processing unit (CPU), a state machine, a data processing unit, and other processors or control circuitry.
- the first transceiver 142 includes suitable logic, circuitry, and interfaces that may be configured to communicate with the computing systems and the secondary storage 106.
- Examples of the first transceiver 142 may include, but are not limited to, an antenna, a radio frequency (RF) transceiver, one or more amplifiers, one or more oscillators, a digital signal processor, a coder-decoder (CODEC) chipset, or a subscriber identity module (SIM) card.
- the first memory 144 includes suitable logic, circuitry, and interfaces that may be configured to store the data received from the computing systems in the primary volume 110 and also store the instructions executable by the first processor 140. Examples of implementation of the first memory 144 may include, but are not limited to, Electrically Erasable Programmable Read-Only Memory (EEPROM), Random Access Memory (RAM), Hard Disk Drive (HDD), Flash memory, Solid-State Drive (SSD), or CPU cache memory.
- FIG. 1C is a block diagram that illustrates various exemplary components of a secondary storage of the cascaded backup system, in accordance with an embodiment of the present disclosure.
- a block diagram 100C of the secondary storage 106 that includes a second processor 146, a second transceiver 148 and a second memory 150.
- the second memory 150 includes the backup data 112, the data mover 116, the cascaded data mover 118, the bitmap 124, the difference bitmap 126, and the merged bitmap 128.
- There are further shown the primary storage 104, the tertiary storage 108, the first communication network 136 and the second communication network 138.
- the second processor 146 is configured to receive the data from the primary storage 104 and execute all operations of the secondary storage 106. In an implementation, the second processor 146 is configured to execute instructions stored in the second memory 150. In an example, the second processor 146 may be a general-purpose processor. Other examples of the second processor 146 may include, but are not limited to, a microprocessor, a microcontroller, a complex instruction set computing (CISC) processor, an application-specific integrated circuit (ASIC) processor, a reduced instruction set (RISC) processor, a very long instruction word (VLIW) processor, a central processing unit (CPU), a state machine, a data processing unit, and other processors or control circuitry.
- the second transceiver 148 includes suitable logic, circuitry, and interfaces that may be configured to communicate with the primary storage 104 and the tertiary storage 108.
- Examples of the second transceiver 148 may include, but are not limited to, an antenna, a radio frequency (RF) transceiver, one or more amplifiers, one or more oscillators, a digital signal processor, a coder-decoder (CODEC) chipset, or a subscriber identity module (SIM) card.
- the second memory 150 includes suitable logic, circuitry, and interfaces that may be configured to store the data received from the primary storage 104 in the backup data 112 and also store the instructions executable by the second processor 146.
- Examples of implementation of the second memory 150 may include, but are not limited to, Electrically Erasable Programmable Read-Only Memory (EEPROM), Random Access Memory (RAM), Hard Disk Drive (HDD), Flash memory, Solid-State Drive (SSD), or CPU cache memory.
- FIG. ID is a block diagram that illustrates various exemplary components of a tertiary storage of the cascaded backup system, in accordance with an embodiment of the present disclosure.
- a block diagram 100D of the tertiary storage 108 that includes a third processor 152, a third transceiver 154 and a third memory 156.
- the third memory 156 includes the cascaded backup data 114.
- the third processor 152 is configured to receive the data from the secondary storage 106.
- the third processor 152 is configured to execute instructions stored in the third memory 156.
- the third processor 152 may be a general-purpose processor.
- Other examples of the third processor 152 may include, but are not limited to, a microprocessor, a microcontroller, a complex instruction set computing (CISC) processor, an application-specific integrated circuit (ASIC) processor, a reduced instruction set (RISC) processor, a very long instruction word (VLIW) processor, a central processing unit (CPU), a state machine, a data processing unit, and other processors or control circuitry.
- the third transceiver 154 includes suitable logic, circuitry, and interfaces that may be configured to communicate with the secondary storage 106.
- Examples of the third transceiver 154 may include, but are not limited to, an antenna, a radio frequency (RF) transceiver, one or more amplifiers, one or more oscillators, a digital signal processor, a coder-decoder (CODEC) chipset, or a subscriber identity module (SIM) card.
- the third memory 156 includes suitable logic, circuitry, and interfaces that may be configured to store the data received from the secondary storage 106 in the cascaded backup data 114 and also store the instructions executable by the third processor 152. Examples of implementation of the third memory 156 may include, but are not limited to, Electrically Erasable Programmable Read-Only Memory (EEPROM), Random Access Memory (RAM), Hard Disk Drive (HDD), Flash memory, Solid-State Drive (SSD), or CPU cache memory.
- FIG. 2 is a flowchart of a method of cascaded backup, in accordance with an embodiment of the present disclosure. With reference to FIG. 2, there is shown the method 200. The method 200 is executed at the cascaded backup system described, for example, in FIG. 1A. The method 200 includes steps 202 to 218.
- the present disclosure provides a method 200 of cascaded backup, comprising: storing data of a computing system in a primary volume 110 on a primary storage 104, replicating, by a data mover 116, the primary volume 110 to a backup data 112 on a secondary storage 106, replicating, by a cascaded data mover 118, the backup data 112 on the secondary storage 106 to a cascaded backup data 114 on a tertiary storage 108, by means of: creating a bitmap 124 of the blocks changed between a last snapshot 130 of the primary volume 110 copied to the cascaded backup data 114 and a current snapshot 132 of the primary volume 110 available in the backup data 112, and copying blocks from the current snapshot 132 to the cascaded backup data 114 according to the bitmap 124, and jumping, by the cascaded data mover 118, to a newer snapshot 134 of the primary volume 110 when the newer snapshot 134 is available in the backup data 112 and a certain percent of the current snapshot 132 is copied to the cascaded backup data 114, by means of: creating a difference bitmap 126 of the difference between the current snapshot 132 and the newer snapshot 134, merging the difference bitmap 126 with the bitmap 124, and copying blocks from the newer snapshot 134 according to the merged bitmap 128.
- the method 200 comprises storing data of a computing system in a primary volume 110 on a primary storage 104.
- the data of the computing system is stored in the primary storage 104 as a source data which is further backed up to the secondary storage 106 to enable restoring of data whenever needed.
- the computing system herein refers to any virtual machine or physical machine that may store respective memory instructions and logical instructions in the primary storage 104.
- the computing system may provide the data for storage whenever there is any update or change in the data.
- a hypervisor of the virtual machine provides the data for storage in the primary storage 104.
- the primary volume 110 refers to a storage space or a disk of the primary storage 104.
- the method 200 further comprises replicating, by a data mover 116, the primary volume 110 to a backup data 112 on a secondary storage 106.
- the replicating enables the primary volume 110 of the primary storage 104 and the backup data 112 of the secondary storage 106 to be in synchronization, i.e., all the data in the primary storage 104 is backed up to the secondary storage 106.
- the method 200 comprises reading the entire disk, i.e., the primary volume 110, and sending all blocks to the secondary storage 106, to write each block into the backup data 112.
- the method 200 enables transferring all the data to the secondary storage 106 to establish a consistent state with the primary storage 104.
- the data mover 116 backs up all the data of the primary storage 104 to the secondary storage 106 for restoring whenever needed by the primary storage 104.
- the method 200 further comprises replicating, by a cascaded data mover 118, the backup data 112 on the secondary storage 106 to a cascaded backup data 114 on a tertiary storage 108.
- the replicating enables the backup data 112 of the secondary storage 106 and the cascaded backup data 114 of the tertiary storage 108 to be in synchronization, i.e., all the data in the secondary storage 106 is backed up to the tertiary storage 108.
- the cascaded data mover 118 backs up all the data of the secondary storage 106 to the tertiary storage 108 for recovering whenever needed by the primary storage 104.
- the method 200 further comprises replicating, by means of creating a bitmap 124 of the blocks changed between a last snapshot 130 of the primary volume 110 copied to the cascaded backup data 114 and a current snapshot 132 of the primary volume 110 available in the backup data 112.
- the last snapshot 130 includes the data stored in the blocks of the primary volume 110, at a first time, which is already copied to the cascaded backup data 114 to enable recovery of the data whenever needed.
- the current snapshot 132 includes the data stored in the blocks of the primary volume 110, at a second time, which is available for copying to the cascaded backup data 114.
- the last snapshot 130 is taken at a time which is earlier than a time at which the current snapshot 132 is taken.
- the method 200 comprises creating the bitmap 124 which provides information about the blocks which have changed or updated data from the time at which the last snapshot 130 is taken to the time at which the current snapshot 132 is taken.
- the method 200 further comprises replicating, by means of copying blocks from the current snapshot 132 to the cascaded backup data 114 according to the bitmap 124.
- the method 200 comprises, after creating the bitmap 124, copying the blocks having the changed or updated data in the current snapshot 132 to the cascaded backup data 114.
- the tertiary storage 108 is consistent with the secondary storage 106.
- the method 200 further comprises jumping, by the cascaded data mover 118, to a newer snapshot 134 of the primary volume 110 when the newer snapshot 134 is available in the backup data 112 and a certain percent of the current snapshot 132 is copied to the cascaded backup data 114.
- the method 200 comprises the primary storage 104 providing the newer snapshot 134 of the primary volume 110 to the secondary storage 106.
- the newer snapshot 134 includes the data stored in the blocks of the primary volume 110, at a third time.
- the newer snapshot 134 is taken after the current snapshot 132 is taken.
- the newer snapshot 134 may be received by the secondary storage 106 when the current snapshot 132 is being copied to the cascaded backup data 114.
- the method 200 comprises detecting when the newer snapshot 134 is available at the secondary storage 106. This detection may be based on parameters like time, date, name and the like. Further, the method 200 provides instructions to jump to the newer snapshot 134 when a certain percentage of the current snapshot 132 is copied to the cascaded backup data 114. In an example, this percentage may be 10 percent.
- the method 200 further comprises jumping, by means of creating a difference bitmap 126 of the difference between the current snapshot 132 and the newer snapshot 134.
- the method 200 comprises creating the difference bitmap 126 which provides information about the blocks which have changed or updated data from the time at which the current snapshot 132 is taken to the time at which the newer snapshot 134 is taken.
- the method 200 further comprises jumping, by means of merging the difference bitmap 126 with the bitmap 124. After creating the difference bitmap 126, the difference bitmap 126 is merged with the bitmap 124 to form the merged bitmap 128.
- the method 200 further comprises jumping, by means of copying blocks from the newer snapshot 134 according to the merged bitmap 128.
- the blocks from the newer snapshot 134 are copied to the cascaded backup data 114.
- the blocks in the current snapshot 132 which are left to be copied to the cascaded backup data 114 are now replaced with the blocks having changed or updated data from the newer snapshot 134.
- the method 200, in comparison to conventional methods, does not have any risk of never finishing the process of copying the data from the secondary storage 106 to the tertiary storage 108, and thus the tertiary storage 108 reaches a consistent state from which restoration can be done.
- the certain percent of the current snapshot 132 is defined by a replication threshold.
- the replication threshold refers to a parameter that indicates an amount of data that must be copied from the current snapshot 132 before moving to read from the newer snapshot 134.
- the replication threshold ensures that in comparison to the conventional methods, there is no risk of not finishing the process of copying the data from the secondary storage 106 to the tertiary storage 108.
- the method 200 further comprises increasing the replication threshold each time the cascaded data mover 118 jumps to a new snapshot until the replication threshold reaches 100 percent.
- an initial value of the replication threshold may be 10 percent and may be increased by 10 percent by the method 200 each time the cascaded data mover 118 jumps to the new snapshot.
- an initial value of the replication threshold may be 10 percent and may be increased by 5 percent each time.
- an initial value of the replication threshold may be 15 percent and may be increased by 5 percent each time. The replication threshold is increased until 100 percent is reached to enable the tertiary storage 108 to achieve consistency with the secondary storage 106.
- the method 200 further comprises decreasing the replication threshold to an initial value after 100 percent of the current snapshot 132 of the primary volume 110 is copied to the cascaded backup data 114.
- the method 200 comprises decreasing the replication threshold to an initial value of 10 percent after 100 percent of the current snapshot 132 is copied to the cascaded backup data 114.
- the tertiary storage 108 achieves consistency with the secondary storage 106.
- the replication threshold is decreased after 100 percent is reached to enable copying of data from given newer snapshots after a certain percentage of a given current snapshot is copied.
- the tertiary storage 108 has improved consistency with the secondary storage 106.
- the replication threshold is increased linearly or exponentially.
- the new percentage of the replication threshold after each jump can be larger than the previous one by a constant amount or by a larger difference.
- an initial value of the replication threshold may be 10 percent which may be increased linearly by 10 percent till 100 percent.
- an initial value of the replication threshold may be 10 percent which may be increased exponentially by 5 percent, 15 percent, 30 percent and 40 percent.
- the method 200 further comprises if no snapshot of the primary volume 110 is copied to the cascaded backup data 114, creating a bitmap of the size of the current snapshot 132 of the primary volume 110 available in the backup data 112 to replicate the backup data 112 on the secondary storage 106 to the cascaded backup data 114 on the tertiary storage 108.
- the current snapshot 132 is a first snapshot to be copied to the cascaded backup data 114.
- the method 200 comprises creating the bitmap of the size of the current snapshot 132.
- the current snapshot 132 is copied to the tertiary storage 108 to enable synchronization of data between the primary storage 104, the secondary storage 106 and the tertiary storage 108.
- the method 200 of the present disclosure provides improved data storage from the secondary storage 106 to the tertiary storage 108 when the current snapshot 132 is being copied to the tertiary storage 108.
- the method 200 ensures that the cascaded data mover 118 does not move to a newer snapshot 134 before completely copying a certain amount of data from the current snapshot 132.
- the method 200 of the present disclosure is not at a risk of never finishing a process of copying data to the tertiary storage 108.
- the method 200 of the present disclosure reaches a consistent state from which restoration can be done when needed.
- a risk of data loss in conventional approaches is significantly reduced in the method 200 of the present disclosure.
- a computer program product comprising a non-transitory computer-readable storage medium having computer program code stored thereon, the computer program code being executable by a processor to execute the method 200.
- a computer program is provided to execute the method 200.
- Examples of implementation of the non-transitory computer-readable storage medium include, but are not limited to, Electrically Erasable Programmable Read-Only Memory (EEPROM), Random Access Memory (RAM), Read Only Memory (ROM), Hard Disk Drive (HDD), Flash memory, a Secure Digital (SD) card, Solid-State Drive (SSD), a computer readable storage medium, and/or CPU cache memory.
- a computer readable storage medium for providing a non-transient memory may include, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, or the like.
- FIG. 3 is a block diagram of a cascaded backup system that depicts data transfer from a primary storage to a secondary storage and a tertiary storage, in accordance with an embodiment of the present disclosure.
- a block diagram 300 of a cascaded backup system 302 that includes a secondary protection environment 304, a hypervisor 306, a virtual machine 308, a disk 310, a first snapshot 312A, a second snapshot 314A, a copied first snapshot 312B, a copied second snapshot 314B, a copy 316 and a difference bitmap 318.
- There are further shown the primary storage 104 and the tertiary storage 108.
- There are also shown the backup data 112, the cascaded backup data 114, the data mover 116, and the cascaded data mover 118.
- the secondary protection environment 304 may also be referred to as the secondary storage 106 of FIG. 1 A.
- the hypervisor 306 includes suitable logic, circuitry, and interfaces that may be configured to run virtual machines such as virtual machine 308.
- the disk 310 of the primary storage 104 may also be referred to as the primary volume 110 of FIG. 1 A which is configured to store data received from the virtual machine 308.
- the first snapshot 312A of the disk 310 may also be referred to as the current snapshot 132 of FIG. 1 A.
- the second snapshot 314A of the disk 310 may also be referred to as the newer snapshot 134 of FIG. 1A.
- the copied first snapshot 312B is the first snapshot 312A copied to the backup data 112, and the copied second snapshot 314B is the second snapshot 314A copied to the backup data 112.
- the blocks of the copied first snapshot 312B and the copied second snapshot 314B are stored in the copy 316.
- the cascaded backup system 302 is configured to create a bitmap (not shown) of the size of the first snapshot 312A of the disk 310 available in the backup data 112 to replicate the backup data 112 to the cascaded backup data 114 on the tertiary storage 108. Further, when the second snapshot 314A of the disk 310 is available in the backup data 112, the cascaded backup system 302 is configured to jump to the second snapshot 314A when a certain percent of the first snapshot 312A is copied to the cascaded backup data 114. The cascaded backup system 302 is configured to create a difference bitmap 318 of the difference between the first snapshot 312A and the second snapshot 314A. The cascaded backup system 302 is further configured to merge the difference bitmap 318 with the bitmap (not shown) and copy blocks from the second snapshot 314A according to the merged bitmap.
- FIG. 4 is a timing diagram of a cascaded backup system that depicts data transfer from a primary storage to a secondary storage and a tertiary storage, in accordance with an embodiment of the present disclosure.
- a timing diagram 400 that includes a replication threshold value 402, a defined location 404, a zeroth snapshot 406, a first snapshot 408, a second snapshot 410, a third snapshot 412, a fourth snapshot 414, a zeroth bitmap 416, a first bitmap 418, a second bitmap 420, a third bitmap 422, a fourth bitmap 424, a fifth bitmap 426, a sixth bitmap 428, a seventh bitmap 430 and an eighth bitmap 432, a zeroth replication disk 434, a first replication disk 436, a second replication disk 438, and a third replication disk 440.
- the zeroth replication disk 434, the first replication disk 436, the second replication disk 438, the third replication disk 440 may also be referred to as the cascaded backup data 114 of FIG. 1 A at different stages of data transfer from the secondary storage 106 to tertiary storage 108.
- a new snapshot, such as the zeroth snapshot 406, becomes available at the secondary storage 106. Since this is the very first snapshot, a zeroth bitmap 416 in the size of the zeroth snapshot 406 is created with all its bits ON, and copying of the data to the zeroth replication disk 434 is started. Further, for each block copied, the corresponding bit is turned OFF in the zeroth bitmap 416.
- the replication threshold at this step is set to 20%.
- a new snapshot, such as the first snapshot 408, is available at the secondary storage 106. Now, more than 20% of the zeroth snapshot 406 has already been copied to the tertiary storage 108, so the present disclosure can jump and read from the first snapshot 408.
- a first bitmap 418 of the difference between zeroth snapshot 406 and first snapshot 408 is created and merged with the zeroth bitmap 416.
- the new bitmap, such as the second bitmap 420, has less than 25% of its bits OFF (because of the merge). Further, the present disclosure starts reading using the second bitmap 420 from the first snapshot 408 and increases the replication threshold to 50%.
- a new snapshot such as a second snapshot 410 is available.
- the present disclosure has copied from the secondary storage 106 to the first replication disk 436 only 40% of the first snapshot 408, so the present disclosure cannot yet move to read from the second snapshot 410.
- the present disclosure continues to read from the first snapshot 408.
- the third bitmap 422 has 40% of its bits OFF.
- a new snapshot such as a third snapshot 412 is available.
- the present disclosure did not copy more than 50% from the secondary storage 106 to the second replication disk 438, so the present disclosure continues to read from the first snapshot 408.
- the fourth bitmap 424 has 48% of its bits OFF.
- 50% of the first snapshot 408 is reached, so the present disclosure can jump to the third snapshot 412. Further, a bitmap, such as the fifth bitmap 426, of the difference between the first snapshot 408 and the third snapshot 412 is created and merged into the current bitmap, i.e., the fourth bitmap 424. Now the new bitmap, such as the sixth bitmap 428, has less than 50% of its bits OFF (because of the merge). Now, the replication threshold is increased to 100%.
- a new snapshot, such as the fourth snapshot 414, is available. This snapshot is ignored for now, as 100% of the third snapshot 412 has not been copied.
- the seventh bitmap 430 has 90% of its bits OFF.
- the present disclosure copied 100% of the third snapshot 412 to the third replication disk 440, and now a consistent checkpoint that can be restored when needed is created on the tertiary storage 108. Further, the replication threshold is decreased back to its initial value of 20%. Further, a bitmap, such as the eighth bitmap 432, of the difference between the third snapshot 412 and the latest snapshot, such as the fourth snapshot 414, is created, and data is copied from the secondary storage 106 to the third replication disk 440 according to this eighth bitmap 432.
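The timeline above can be approximated with the sketches introduced earlier (all names are the hypothetical ones from those sketches; snapshot contents and arrival order are supplied by the caller). Constructing the threshold as `ReplicationThreshold(initial=20, step=30)` gives the 20% and 50% steps of this example, with the 100 percent cap supplying the final stage:

```python
def replicate_until_consistent(snapshot_queue, cascaded_backup, threshold):
    """One replication cycle ending in a consistent checkpoint on the
    tertiary storage, jumping to newer snapshots when the threshold allows."""
    current = snapshot_queue.pop(0)
    bitmap = create_full_bitmap(current)      # very first snapshot: all ON
    total = len(bitmap)
    while any(bitmap):
        index = bitmap.index(True)            # next block still to copy
        cascaded_backup[index] = current[index]
        bitmap[index] = False
        percent_copied = 100.0 * (total - sum(bitmap)) / total
        if snapshot_queue and threshold.may_jump(percent_copied):
            current, bitmap = jump_to_newer_snapshot(
                current, snapshot_queue.pop(0), bitmap)
            threshold.on_jump()               # raise the bar for the next jump
    threshold.on_snapshot_complete()          # consistent checkpoint; reset
    return current                            # snapshot now on the tertiary
```

Note how, as in the walkthrough, a merge can turn bits back ON, so `percent_copied` can drop after a jump, and once the threshold is capped at 100% no further jump occurs until the cycle completes.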
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
A cascaded data mover for a cascaded backup system is provided; the cascaded backup system includes a primary storage having a primary volume, a secondary storage having backup data, a tertiary storage having cascaded backup data, and a data mover. The cascaded data mover includes a replication unit configured to replicate the backup data to the cascaded backup data by means of: creating a bitmap of the blocks changed between the last snapshot of the primary volume and the current snapshot available in the backup data, and copying blocks from the current snapshot to the cascaded backup data according to the bitmap. The cascaded data mover further includes a control unit that instructs the replication unit to jump to a newer snapshot when a certain percent of the current snapshot is copied to the cascaded backup data, by means of creating a difference bitmap, merging the difference bitmap with the bitmap, and copying blocks from the newer snapshot according to the merged bitmap. The system provides improved data storage by not moving to the newer snapshot before completely copying a certain amount of data from the current snapshot.
Description
CASCADED DATA MOVER FOR CASCADED BACKUP SYSTEM AND METHOD
OF CASCADED BACKUP
TECHNICAL FIELD
The present disclosure relates generally to the field of data protection and backup; and more specifically, to a cascaded data mover for a cascaded backup system and a method of cascaded backup.
BACKGROUND
Generally, data is backed up from a primary storage to a secondary storage. The backed up data may then be transferred to a recovery site, such as a tertiary storage, to protect and recover data in an event of data loss in the secondary storage. Examples of the event of data loss may include, but are not limited to, data corruption, hardware or software failure, accidental deletion of data, hacking, or malicious attack. For safety reasons, a separate recovery site is generally extensively used to store a copy of the data present in the secondary storage. Such primary, secondary, and tertiary storages may collectively be referred to as a cascaded CDP (continuous data protection) system. In the cascaded CDP system, all the data in the primary storage, being a protected item (e.g., a virtual machine), is backed up to the secondary storage whenever any change is made in the data, and the data from the secondary storage is replicated to the tertiary storage. For example, in a replication of a given virtual machine, the blocks written on the primary storage are backed up to the secondary storage, and so on, continuously.
The primary, secondary, and tertiary storages usually need to be synchronized in the shortest time possible, because a long synchronization time risks data loss. The backup data is generally provided by the primary storage to the secondary storage quickly. Thus, there may be many backup points created in the secondary storage. Generally, a first transfer of a snapshot of backup data to the tertiary storage can take a much longer time than a transfer from the primary storage to the secondary storage due to bandwidth limitations. In an example, when a first snapshot is available on the secondary storage, copying the first snapshot to the tertiary storage can then be started. During a copy of this snapshot from the secondary to the tertiary storage, a new snapshot may become available on the secondary storage. In such a case, conventional approaches may finish copying the current snapshot, i.e., without reading from the new snapshot even when one exists. Thus, a consistent point in time is created when the copying is finished, but it may not be updated to the newest snapshot available. Other conventional approaches create a bitmap of block changes in the new snapshot and add the changes to a current bitmap (i.e., of the current snapshot). Thus, this conventional approach reads from the new snapshot whenever the new snapshot is available, and when the copying is finished, the tertiary storage is updated to the newest snapshot available. However, there is always a risk of never finishing the aforesaid process of copying data to the tertiary storage and never reaching a consistent state. Thus, this is a prominent technical challenge, as there may be a case of data loss due to this inconsistency.
Therefore, in light of the foregoing discussion, there exists a need to overcome the aforementioned drawbacks associated with conventional cascaded backup systems.
SUMMARY
The present disclosure provides a cascaded data mover for a cascaded backup system and a method of cascaded backup. The present disclosure addresses the existing problem of a risk of data loss and a long wait for the tertiary storage when one snapshot is being copied from a secondary storage to the tertiary storage and another (new) snapshot becomes available at the secondary storage for copying. This long wait increases the risk of data loss. An aim of the present disclosure is to provide a solution that at least partially overcomes the problems encountered in the prior art and provides an improved cascaded backup system and method of cascaded backup, which switch (i.e., jump) to a new snapshot while copying a current snapshot from the secondary storage to the tertiary storage only when a defined threshold percentage of data has been copied from the current snapshot.
One or more objects of the present disclosure are achieved by the solutions provided in the enclosed independent claims. Advantageous implementations of the present disclosure are further defined in the dependent claims.
In one aspect, the present disclosure provides a cascaded data mover for a cascaded backup system, the cascaded backup system comprising a primary storage with a primary volume for storing data of a computing system, a secondary storage for storing a backup data of the primary volume obtained from the primary storage, a tertiary storage for storing a cascaded backup data of the primary volume obtained from the secondary storage, and a data mover for replicating the primary volume to the backup data on the secondary storage, the cascaded data mover comprising: a replication unit configured to replicate the backup data on the secondary storage to the cascaded backup data on the tertiary storage, by means of: creating a bitmap of the block changes between a last snapshot of the primary volume copied to the cascaded backup data and a current snapshot of the primary volume available in the backup data, and copying blocks from the current snapshot to the cascaded backup data according to the bitmap, and a control unit configured to determine when a newer snapshot of the primary volume is available in the backup data and instruct the replication unit to jump to the newer snapshot when a certain percent of the current snapshot is copied to the cascaded backup data, by means of: creating a difference bitmap of the difference between the current snapshot and the newer snapshot, merging the difference bitmap with the bitmap, and copying blocks from the newer snapshot according to the merged bitmap.
The system of the present disclosure provides improved data storage from the secondary storage to the tertiary storage while the current snapshot is copied to the tertiary storage. The system ensures that the cascaded data mover does not move to a newer snapshot before completely copying a certain amount of data from the current snapshot. In comparison to conventional systems, the system of the present disclosure is not at risk of never finishing the process of copying data to the tertiary storage. Thus, the cascaded data mover of the system of the present disclosure reaches a consistent state from which restoration can be done when needed. Further, as the tertiary storage is updated to the newer snapshot only after a certain amount of data from the current snapshot is copied, the risk of data loss seen in conventional approaches is significantly reduced in the system of the present disclosure.
In an implementation form, the certain percent of the current snapshot is defined by a replication threshold.
The replication threshold ensures that in comparison to the conventional approach, there is no risk of not finishing the process of copying the data from the secondary storage to the tertiary storage.
In a further implementation form, the replication threshold is increased each time when the cascaded data mover jumps to a new snapshot until the replication threshold reaches 100 percent.
The replication threshold is increased until 100 percent is reached, enabling the tertiary storage to achieve consistency with the secondary storage.
In a further implementation form, the replication threshold is decreased to an initial value after 100 percent of the current snapshot of the primary volume is copied to the cascaded backup data.
The replication threshold is decreased after 100 percent is reached to enable copying of data from given newer snapshots after a certain percentage of a given current snapshot is copied. Thus, the tertiary storage has improved consistency with the secondary storage.
In a further implementation form, if no snapshot of the primary volume is copied to the cascaded backup data, the replication unit is configured to create a bitmap of the size of the current snapshot of the primary volume available in the backup data to replicate the backup data on the secondary storage to the cascaded backup data on the tertiary storage.
Thus, when the current snapshot is the first snapshot to be copied, the bitmap enables the current snapshot to be copied to the tertiary storage to enable synchronization of data between the primary storage, the secondary storage and the tertiary storage.
In another aspect, the present disclosure provides a method of cascaded backup, comprising: storing data of a computing system in a primary volume on a primary storage, replicating, by a data mover, the primary volume to a backup data on a secondary storage, replicating, by a cascaded data mover, the backup data on the secondary storage to a cascaded backup data on a tertiary storage, by means of: creating a bitmap of the block changes between a last snapshot of the primary volume copied to the cascaded backup data and a current snapshot of the primary volume available in the backup data, and copying blocks from the current snapshot to the cascaded backup data according to the bitmap, and jumping, by the cascaded data mover, to a newer snapshot of the primary volume when the newer snapshot is available in the backup data and a certain percent of the current snapshot is copied to the cascaded backup data, by means of: creating a difference bitmap of the difference between the current snapshot and the newer snapshot, merging the difference bitmap with the bitmap, and copying blocks from the newer snapshot according to the merged bitmap.
The method achieves all the advantages and effects of the cascaded data mover of the present disclosure.
It is to be appreciated that all the aforementioned implementation forms can be combined. It has to be noted that all devices, elements, circuitry, units and means described in the present application could be implemented in the software or hardware elements or any kind of combination thereof. All steps which are performed by the various entities described in the present application as well as the functionalities described to be performed by the various entities are intended to mean that the respective entity is adapted to or configured to perform the respective steps and functionalities. Even if, in the following description of specific embodiments, a specific functionality or step to be performed by external entities is not reflected in the description of a specific detailed element of that entity which performs that specific step or functionality, it should be clear for a skilled person that these methods and functionalities can be implemented in respective software or hardware elements, or any kind of combination thereof. It will be appreciated that features of the present disclosure are susceptible to being combined in various combinations without departing from the scope of the present disclosure as defined by the appended claims.
Additional aspects, advantages, features and objects of the present disclosure would be made apparent from the drawings and the detailed description of the illustrative implementations construed in conjunction with the appended claims that follow.
BRIEF DESCRIPTION OF THE DRAWINGS
The summary above, as well as the following detailed description of illustrative embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the present disclosure, exemplary constructions of the disclosure are shown in the drawings. However, the present disclosure is not limited to specific methods and instrumentalities disclosed herein. Moreover, those skilled in the art will understand that the drawings are not to scale. Wherever possible, like elements have been indicated by identical numbers.
Embodiments of the present disclosure will now be described, by way of example only, with reference to the following diagrams wherein:
FIG. 1A is a block diagram of a cascaded backup system, in accordance with an embodiment of the present disclosure;
FIG. 1B is a block diagram that illustrates various exemplary components of a primary storage of the cascaded backup system, in accordance with an embodiment of the present disclosure;
FIG. 1C is a block diagram that illustrates various exemplary components of a secondary storage of the cascaded backup system, in accordance with an embodiment of the present disclosure;
FIG. 1D is a block diagram that illustrates various exemplary components of a tertiary storage of the cascaded backup system, in accordance with an embodiment of the present disclosure;
FIG. 2 is a flowchart of a method of cascaded backup, in accordance with an embodiment of the present disclosure;
FIG. 3 is a block diagram of a cascaded backup system that depicts data transfer from a primary storage to a secondary storage and a tertiary storage, in accordance with an embodiment of the present disclosure; and
FIG. 4 is a timing diagram of a cascaded backup system that depicts data transfer from a primary storage to a secondary storage and a tertiary storage, in accordance with an embodiment of the present disclosure.
In the accompanying drawings, an underlined number is employed to represent an item over which the underlined number is positioned or an item to which the underlined number is adjacent. A non-underlined number relates to an item identified by a line linking the non- underlined number to the item. When a number is non-underlined and accompanied by an associated arrow, the non-underlined number is used to identify a general item at which the arrow is pointing.
DETAILED DESCRIPTION OF EMBODIMENTS
The following detailed description illustrates embodiments of the present disclosure and ways in which they can be implemented. Although some modes of carrying out the present disclosure have been disclosed, those skilled in the art would recognize that other embodiments for carrying out or practicing the present disclosure are also possible.
FIG. 1A is a block diagram of a cascaded backup system, in accordance with an embodiment of the present disclosure. With reference to FIG. 1A, there is shown a block diagram 100A of a cascaded backup system 102 that includes a primary storage 104, a secondary storage 106, and a tertiary storage 108. There is shown a primary volume 110, a backup data 112, a cascaded backup data 114, a data mover 116, a cascaded data mover 118, a replication unit 120, a control unit 122, a bitmap 124, a difference bitmap 126, a merged bitmap 128, a last snapshot 130, a current snapshot 132 and a newer snapshot 134. There is further shown a first communication network 136 and a second communication network 138. In an implementation, the units, such as the replication unit 120 and the control unit 122, may also be referred to as a replication circuit and a control circuit, respectively.
In one aspect, the present disclosure provides a cascaded data mover 118 for a cascaded backup system 102, the cascaded backup system 102 comprising a primary storage 104 with a primary volume 110 for storing data of a computing system, a secondary storage 106 for storing a backup data 112 of the primary volume 110 obtained from the primary storage 104, a tertiary storage 108 for storing a cascaded backup data 114 of the primary volume 110 obtained from the secondary storage 106, and a data mover 116 for replicating the primary volume 110 to the backup data 112 on the secondary storage 106, the cascaded data mover 118 comprising: a replication unit 120 configured to replicate the backup data 112 on the secondary storage 106 to the cascaded backup data 114 on the tertiary storage 108, by means of: creating a bitmap 124 of the block changes between a last snapshot 130 of the primary volume 110 copied to the cascaded backup data 114 and a current snapshot 132 of the primary volume 110 available in the backup data 112, and copying blocks from the current snapshot 132 to the cascaded backup data 114 according to the bitmap 124, and
a control unit 122 configured to determine when a newer snapshot 134 of the primary volume 110 is available in the backup data 112 and instruct the replication unit 120 to jump to the newer snapshot 134 when a certain percent of the current snapshot 132 is copied to the cascaded backup data 114, by means of: creating a difference bitmap 126 of the difference between the current snapshot 132 and the newer snapshot 134, merging the difference bitmap 126 with the bitmap 124, and copying blocks from the newer snapshot 134 according to the merged bitmap 128.
The primary storage 104 comprises the primary volume 110 for storing data of the computing system. The primary storage 104 includes suitable logic, circuitry, and interfaces that may be configured to store data from computing systems such as virtual machines or physical machines. In an example, the primary storage 104 may be part of the computing system. The data received by the primary storage 104 is stored in the primary volume 110, such as a storage disk. Examples of implementation of the primary storage 104 may include, but are not limited to, a production environment storage, a data storage server, or a datacentre that includes one or more hard disk drives, often arranged into logical, redundant storage containers or a Redundant Array of Inexpensive Disks (RAID).
The secondary storage 106 is configured for storing the backup data 112 of the primary volume 110 obtained from the primary storage 104 and the bitmap 124, the difference bitmap 126, the merged bitmap 128, the last snapshot 130, the current snapshot 132 and the newer snapshot 134. The secondary storage 106 includes suitable logic, circuitry, and interfaces that may be configured to back-up the data stored in the primary storage 104 for recovery whenever needed. The secondary storage 106 stores the data received from the primary storage 104 in the backup data 112. In an example, the secondary storage 106 may be a Network Attached Storage (NAS). Examples of implementation of the secondary storage 106 may include, but are not limited to, block-based storages, storage arrays and the like.
The tertiary storage 108 is configured for storing the cascaded backup data 114 of the primary volume 110 obtained from the secondary storage 106. The tertiary storage 108 includes suitable logic, circuitry, and interfaces that may be configured to store the backup data of the secondary storage 106 for restoration in case of data loss at the secondary storage 106. The tertiary storage 108 stores the data received from the secondary storage 106 in the cascaded backup data 114. In an example, the tertiary storage 108 may be a cloud-based storage, a storage array, and the like.
The data mover 116 includes suitable logic, circuitry, and interfaces that may be configured to enable replicating the primary volume 110 to the backup data 112 on the secondary storage 106 for establishing a synchronization between the primary storage 104 and the secondary storage 106. In an example, the data mover 116 may be implemented as a software module or circuitry.
The cascaded data mover 118 includes suitable logic, circuitry, and interfaces that may be configured to enable storing of data in the tertiary storage 108 from the secondary storage 106 for establishing a synchronization between the secondary storage 106 and the tertiary storage 108. In an example, the cascaded data mover 118 may be implemented as a software module or circuitry.
The first communication network 136 includes a medium (e.g., a communication channel) through which the primary storage 104 communicates with the secondary storage 106. The second communication network 138 includes a medium (e.g., a communication channel) through which the secondary storage 106 communicates with the tertiary storage 108. The first communication network 136 and the second communication network 138 may be a wired or wireless communication network. Examples of the first communication network 136 and the second communication network 138 may include, but are not limited to, a Wireless Fidelity (Wi-Fi) network, a Local Area Network (LAN), a wireless personal area network (WPAN), a Wireless Local Area Network (WLAN), a wireless wide area network (WWAN), a cloud network, a Long Term Evolution (LTE) network, a Metropolitan Area Network (MAN), or the Internet.
In operation, the replication unit 120 of the cascaded data mover 118 is configured to replicate the backup data 112 on the secondary storage 106 to the cascaded backup data 114 on the tertiary storage 108, by means of: creating a bitmap 124 of the blocks change between a last snapshot 130 of the primary volume 110 copied to the cascaded backup data 114 and a current snapshot 132 of the primary volume 110 available in the backup data 112, and copying blocks from the current snapshot 132 to the cascaded backup data 114 according to the bitmap 124. The last snapshot 130 includes the data stored in the blocks of the primary volume 110, at a
first time, which is already copied to the cascaded backup data 114 to enable recovery of the data whenever needed. The current snapshot 132 includes the data stored in the blocks of the primary volume 110, at a second time, which is available for copying to the cascaded backup data 114. The first time is before the second time, i.e., the last snapshot 130 is taken earlier than the current snapshot 132. The replication unit 120 is configured to create the bitmap 124, which provides information about the blocks whose data has changed or been updated between the time at which the last snapshot 130 is taken and the time at which the current snapshot 132 is taken. As the primary volume 110 is connected to the computing system, the blocks in the primary volume 110 may keep changing based on data provided by the computing system. Further, after creating the bitmap 124, the blocks having the changed or updated data in the current snapshot 132 are copied to the cascaded backup data 114. Thus, by virtue of copying the current snapshot 132, the tertiary storage 108 is consistent with the secondary storage 106.
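Purely for illustration, the following minimal Python sketch shows one way such a changed-block bitmap and the bitmap-driven copy could be realized. The block count, the snapshot and backup objects, and their read_block/write_block methods are hypothetical stand-ins, not an API defined by the disclosure.

```python
# Illustrative sketch only; snapshot/backup objects and their
# read_block/write_block methods are hypothetical stand-ins.
BLOCK_COUNT = 1024  # assumed number of blocks in the primary volume

def create_bitmap(last_snapshot, current_snapshot):
    """Bit i is ON (True) when block i differs between the two snapshots.

    When no snapshot has been copied yet (last_snapshot is None), every
    bit is ON so that the whole current snapshot is transferred.
    """
    if last_snapshot is None:
        return [True] * BLOCK_COUNT
    return [last_snapshot.read_block(i) != current_snapshot.read_block(i)
            for i in range(BLOCK_COUNT)]

def copy_according_to_bitmap(bitmap, snapshot, cascaded_backup, batch=None):
    """Copy flagged blocks to the cascaded backup data, clearing each bit
    once its block is written; an optional batch bound lets the caller
    interleave copying with checks for newer snapshots."""
    copied = 0
    for i, dirty in enumerate(bitmap):
        if dirty:
            cascaded_backup.write_block(i, snapshot.read_block(i))
            bitmap[i] = False
            copied += 1
            if batch is not None and copied == batch:
                break
```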
The control unit 122 of the cascaded data mover 118 is configured to determine when a newer snapshot 134 of the primary volume 110 is available in the backup data 112 and instruct the replication unit 120 to jump to the newer snapshot 134 when a certain percent of the current snapshot 132 is copied to the cascaded backup data 114, by means of: creating a difference bitmap 126 of the difference between the current snapshot 132 and the newer snapshot 134, merging the difference bitmap 126 with the bitmap 124, and copying blocks from the newer snapshot 134 according to the merged bitmap 128. The primary storage 104 is configured to provide the newer snapshot 134 of the primary volume 110 to the secondary storage 106. The newer snapshot 134 includes the data stored in the blocks of the primary volume 110, at a third time. The third time is after the second time i.e., the newer snapshot 134 is taken after the current snapshot 132 is taken. The newer snapshot 134 may be received by the secondary storage 106 when the current snapshot 132 is being copied to the cascaded backup data 114. The control unit 122 detects when the newer snapshot 134 is available at the secondary storage 106. This detection may be based on parameters like time, date, name and the like. Further, the control unit 122 provides instructions to the replication unit 120 to jump to the newer snapshot 134 when a certain percentage of the current snapshot 132 is copied to the cascaded backup data 114. In an example, this percentage may be 10 percent. The replication unit 120 is configured to create the difference bitmap 126 which provides information about the blocks which have changed or updated data from the time at which the current snapshot 132 is taken to the time at which the newer snapshot 134 is taken. Further, after creating the difference
bitmap 126, it is merged with the bitmap 124, and blocks from the newer snapshot 134 are copied to the cascaded backup data 114 according to the merged bitmap 128. Thus, the blocks of the current snapshot 132 that remained to be copied to the cascaded backup data 114 are now replaced with the blocks having changed or updated data from the newer snapshot 134. Beneficially, in comparison to conventional systems, the system 102 does not run the risk of never finishing the process of copying the data from the secondary storage 106 to the tertiary storage 108, and thus the tertiary storage 108 reaches a consistent state from which restoration can be done.
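A corresponding sketch of the jump decision follows, reusing create_bitmap from the sketch above. The percent-copied measure (OFF bits over total bits) mirrors the way progress is counted in the FIG. 4 walkthrough below; it is an assumed convention, not a prescribed formula.

```python
def copied_percent(bitmap):
    # Bits are cleared as blocks are copied, so the share of OFF bits
    # tracks progress, as in the FIG. 4 walkthrough.
    return 100.0 * bitmap.count(False) / len(bitmap)

def maybe_jump(bitmap, current_snapshot, newer_snapshot, threshold):
    """Stay on the current snapshot while below the replication threshold;
    otherwise create the difference bitmap, merge it (bitwise OR), and
    continue reading from the newer snapshot."""
    if copied_percent(bitmap) < threshold:
        return current_snapshot, bitmap
    diff = create_bitmap(current_snapshot, newer_snapshot)
    merged = [a or b for a, b in zip(bitmap, diff)]  # OR-merge of the bitmaps
    return newer_snapshot, merged
```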
According to an embodiment, the certain percent of the current snapshot 132 is defined by a replication threshold. The replication threshold refers to a parameter that indicates an amount of data that must be copied from the current snapshot 132 before moving to read from the newer snapshot 134. The replication threshold ensures that, in comparison to the conventional approach, there is no risk of not finishing the process of copying the data from the secondary storage 106 to the tertiary storage 108.
According to an embodiment, the replication threshold is increased each time when the cascaded data mover 118 jumps to a new snapshot until the replication threshold reaches 100 percent. In an example, an initial value of the replication threshold may be 10 percent and may be increased by 10 percent each time when the cascaded data mover 118 jumps to the new snapshot. In another example, an initial value of the replication threshold may be 10 percent and may be increased by 5 percent each time. In another example, an initial value of the replication threshold may be 15 percent and may be increased by 5 percent each time. The replication threshold is increased till 100 percent is reached to enable the tertiary storage 108 to achieve a consistency with the secondary storage 106.
According to an embodiment, the replication threshold is decreased to an initial value after 100 percent of the current snapshot 132 of the primary volume 110 is copied to the cascaded backup data 114. In an example, the replication threshold is decreased to an initial value of 10 percent after 100 percent of the current snapshot 132 is copied to the cascaded backup data 114. At 100 percent the tertiary storage 108 achieves a consistency with the secondary storage 106. The replication threshold is decreased after 100 percent is reached to enable copying of data from given newer snapshots after a certain percentage of a given current snapshot is copied. Thus, the tertiary storage 108 has improved consistency with the secondary storage 106.
According to an embodiment, the replication threshold is increased linearly or exponentially. The new percentage of the replication threshold after each jump can be larger than the previous one by a constant amount or by a larger difference. In an example, an initial value of the replication threshold may be 10 percent which may be increased linearly by 10 percent till 100 percent. In another example, an initial value of the replication threshold may be 10 percent which may be increased exponentially by 5 percent, 15 percent, 30 percent and 40 percent.
In an example, the aforesaid replication threshold may be ten percent. Thus, the present disclosure starts by copying the current snapshot 132 and does not move to the newer snapshot 134 until at least 10 percent of the current snapshot 132 is copied, even if the newer snapshot 134 becomes available earlier. Further, after copying at least 10 percent of the current snapshot 132, if a new snapshot exists (it could be the newer snapshot 134 or an even newer snapshot), then the difference bitmap 126 is created from the difference between the current snapshot 132 and the newer snapshot 134 and merged with the bitmap 124 to start reading from the newer snapshot 134. Further, the present disclosure does not jump to the next snapshot until at least 20 percent of the newer snapshot 134 is copied, and so on for other newer snapshots. Thus, after ten jumps, the present disclosure does not move any more until 100 percent of a given snapshot is copied. As a result, even if the copying of a given current snapshot is not finished when a given newer snapshot is created, the present disclosure finally copies a snapshot to the end and creates a consistent point in the tertiary storage 108.
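As a sketch, the threshold schedule from the example above (start at 10 percent, add 10 points per jump, cap at 100, reset after a full copy) can be written as follows; the step size and cap are the example's values, not mandated ones.

```python
def next_threshold(threshold, step=10, cap=100):
    """Linear schedule: raise the replication threshold by a fixed step
    each time the cascaded data mover jumps to a new snapshot."""
    return min(threshold + step, cap)

threshold = 10                  # initial value from the example
for _ in range(9):              # nine jumps: 10 -> 20 -> ... -> 100
    threshold = next_threshold(threshold)
assert threshold == 100         # no further jumps until a full copy finishes
threshold = 10                  # reset once 100 percent of a snapshot is copied
```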
According to an embodiment, if no snapshot of the primary volume 110 is copied to the cascaded backup data 114, the replication unit 120 is configured to create a bitmap of the size of the current snapshot 132 of the primary volume 110 available in the backup data 112 to replicate the backup data 112 on the secondary storage 106 to the cascaded backup data 114 on the tertiary storage 108. In such a case, the current snapshot 132 is the first snapshot to be copied to the cascaded backup data 114, and a bitmap of the size of the current snapshot 132 is created from the current snapshot 132. Thus, the current snapshot 132 is copied to the tertiary storage 108 to enable synchronization of data between the primary storage 104, the secondary storage 106, and the tertiary storage 108.
The system 102 of the present disclosure provides improved data storage from the secondary storage 106 to the tertiary storage 108 while the current snapshot 132 is being copied to the tertiary storage 108. The system 102 ensures that the cascaded data mover 118 does not move to a newer snapshot 134 before completely copying a certain amount of data from the current snapshot 132. In comparison to conventional systems, the system 102 of the present disclosure is not at risk of never finishing the process of copying data to the tertiary storage 108. Thus, the system 102 of the present disclosure reaches a consistent state from which restoration can be done when needed. Further, as the tertiary storage 108 is updated to the newer snapshot 134 only after a certain amount of data from the current snapshot 132 is copied, the risk of data loss seen in conventional approaches is significantly reduced in the system 102 of the present disclosure.
FIG. 1B is a block diagram that illustrates various exemplary components of a primary storage of the cascaded backup system, in accordance with an embodiment of the present disclosure. With reference to FIG. 1B, there is shown a block diagram 100B of the primary storage 104 that includes a first processor 140, a first transceiver 142 and a first memory 144. The first memory 144 includes the primary volume 110. There is further shown the secondary storage 106, the tertiary storage 108, the data mover 116, the cascaded data mover 118, the bitmap 124, the difference bitmap 126, the merged bitmap 128, the first communication network 136 and the second communication network 138.
The first processor 140 is configured to receive the data from computing systems such as virtual machines or physical machines. In an implementation, the first processor 140 is configured to execute instructions stored in the first memory 144. In an example, the first processor 140 may be a general-purpose processor. Other examples of the first processor 140 may include, but are not limited to, a microprocessor, a microcontroller, a complex instruction set computing (CISC) processor, an application-specific integrated circuit (ASIC) processor, a reduced instruction set (RISC) processor, a very long instruction word (VLIW) processor, a central processing unit (CPU), a state machine, a data processing unit, and other processors or control circuitry.
The first transceiver 142 includes suitable logic, circuitry, and interfaces that may be configured to communicate with the computing systems and the secondary storage 106. Examples of the first transceiver 142 may include, but are not limited to, an antenna, a radio frequency (RF) transceiver, one or more amplifiers, one or more oscillators, a digital signal processor, a coder-decoder (CODEC) chipset, or a subscriber identity module (SIM) card.
The first memory 144 includes suitable logic, circuitry, and interfaces that may be configured to store the data received from the computing systems in the primary volume 110 and also store the instructions executable by the first processor 140. Examples of implementation of the first
memory 144 may include, but are not limited to, Electrically Erasable Programmable Read-Only Memory (EEPROM), Random Access Memory (RAM), Hard Disk Drive (HDD), Flash memory, Solid-State Drive (SSD), or CPU cache memory.
FIG. 1C is a block diagram that illustrates various exemplary components of a secondary storage of the cascaded backup system, in accordance with an embodiment of the present disclosure. With reference to FIG. 1C, there is shown a block diagram 100C of the secondary storage 106 that includes a second processor 146, a second transceiver 148 and a second memory 150. The second memory 150 includes the backup data 112, the data mover 116, the cascaded data mover 118, the bitmap 124, the difference bitmap 126, and the merged bitmap 128. There is further shown the primary storage 104, the tertiary storage 108, the first communication network 136 and the second communication network 138.
The second processor 146 is configured to receive the data from the primary storage 104 and execute all operations of the secondary storage 106. In an implementation, the second processor 146 is configured to execute instructions stored in the second memory 150. In an example, the second processor 146 may be a general-purpose processor. Other examples of the second processor 146 may include, but are not limited to, a microprocessor, a microcontroller, a complex instruction set computing (CISC) processor, an application-specific integrated circuit (ASIC) processor, a reduced instruction set (RISC) processor, a very long instruction word (VLIW) processor, a central processing unit (CPU), a state machine, a data processing unit, and other processors or control circuitry.
The second transceiver 148 includes suitable logic, circuitry, and interfaces that may be configured to communicate with the primary storage 104 and the tertiary storage 108. Examples of the second transceiver 148 may include, but are not limited to, an antenna, a radio frequency (RF) transceiver, one or more amplifiers, one or more oscillators, a digital signal processor, a coder-decoder (CODEC) chipset, or a subscriber identity module (SIM) card.
The second memory 150 includes suitable logic, circuitry, and interfaces that may be configured to store the data received from the primary storage 104 in the backup data 112 and also store the instructions executable by the second processor 146. Examples of implementation of the second memory 150 may include, but are not limited to, Electrically Erasable Programmable Read-Only Memory (EEPROM), Random Access Memory (RAM), Hard Disk Drive (HDD), Flash memory, Solid-State Drive (SSD), or CPU cache memory. In an example, the data mover
116 and the cascaded data mover 118 may be present inside the second memory 150. In another example, the data mover 116 and the cascaded data mover 118 may be present outside the second memory 150.
FIG. 1D is a block diagram that illustrates various exemplary components of a tertiary storage of the cascaded backup system, in accordance with an embodiment of the present disclosure. With reference to FIG. 1D, there is shown a block diagram 100D of the tertiary storage 108 that includes a third processor 152, a third transceiver 154 and a third memory 156. The third memory 156 includes the cascaded backup data 114. There is further shown the primary storage 104, the secondary storage 106, the data mover 116, the cascaded data mover 118, the bitmap 124, the difference bitmap 126, the merged bitmap 128, the first communication network 136 and the second communication network 138.
The third processor 152 is configured to receive the data from the secondary storage 106. In an implementation, the third processor 152 is configured to execute instructions stored in the third memory 156. In an example, the third processor 152 may be a general-purpose processor. Other examples of the third processor 152 may include, but are not limited to, a microprocessor, a microcontroller, a complex instruction set computing (CISC) processor, an application-specific integrated circuit (ASIC) processor, a reduced instruction set (RISC) processor, a very long instruction word (VLIW) processor, a central processing unit (CPU), a state machine, a data processing unit, and other processors or control circuitry.
The third transceiver 154 includes suitable logic, circuitry, and interfaces that may be configured to communicate with the secondary storage 106. Examples of the third transceiver 154 may include, but are not limited to, an antenna, a radio frequency (RF) transceiver, one or more amplifiers, one or more oscillators, a digital signal processor, a coder-decoder (CODEC) chipset, or a subscriber identity module (SIM) card.
The third memory 156 includes suitable logic, circuitry, and interfaces that may be configured to store the data received from the secondary storage 106 in the cascaded backup data 114 and also store the instructions executable by the third processor 152. Examples of implementation of the third memory 156 may include, but are not limited to, Electrically Erasable Programmable Read-Only Memory (EEPROM), Random Access Memory (RAM), Hard Disk Drive (HDD), Flash memory, Solid-State Drive (SSD), or CPU cache memory.
FIG. 2 is a flowchart of a method of cascaded backup, in accordance with an embodiment of the present disclosure. With reference to FIG. 2, there is shown the method 200. The method 200 is executed at the cascaded backup system described, for example, in FIG. 1A. The method 200 includes steps 202 to 218.
In another aspect, the present disclosure provides a method 200 of cascaded backup, comprising: storing data of a computing system in a primary volume 110 on a primary storage 104, replicating, by a data mover 116, the primary volume 110 to a backup data 112 on a secondary storage 106, replicating, by a cascaded data mover 118, the backup data 112 on the secondary storage 106 to a cascaded backup data 114 on a tertiary storage 108, by means of: creating a bitmap 124 of the block changes between a last snapshot 130 of the primary volume 110 copied to the cascaded backup data 114 and a current snapshot 132 of the primary volume 110 available in the backup data 112, and copying blocks from the current snapshot 132 to the cascaded backup data 114 according to the bitmap 124, and jumping, by the cascaded data mover 118, to a newer snapshot 134 of the primary volume 110 when the newer snapshot 134 is available in the backup data 112 and a certain percent of the current snapshot 132 is copied to the cascaded backup data 114, by means of: creating a difference bitmap 126 of the difference between the current snapshot 132 and the newer snapshot 134, merging the difference bitmap 126 with the bitmap 124, and copying blocks from the newer snapshot 134 according to the merged bitmap 128.
At step 202, the method 200 comprises storing data of a computing system in a primary volume 110 on a primary storage 104. The data of the computing system is stored in the primary storage 104 as a source data which is further backed up to the secondary storage 106 to enable restoring
of data whenever needed. In an example, the computing system herein refers to any virtual machine or physical machine that may store respective memory instructions and logical instructions in the primary storage 104. The computing system may provide the data for storage whenever there is any update or change in the data. In an example, a hypervisor of the virtual machine provides the data for storage in the primary storage 104. The primary volume 110 refers to a storage space or a disk of the primary storage 104.
At step 204, the method 200 further comprises replicating, by a data mover 116, the primary volume 110 to a backup data 112 on a secondary storage 106. The replicating enables the primary volume 110 of the primary storage 104 and the backup data 112 of the secondary storage 106 to be in synchronization, i.e., all the data in the primary storage 104 is backed up to the secondary storage 106. In other words, the method 200 comprises reading an entire disk, i.e., the primary volume 110, and sending all blocks to the secondary storage 106 to write each block into the backup data 112. The method 200 thereby transfers all the data to the secondary storage 106 to establish a consistent state with the primary storage 104. The data mover 116 backs up all the data of the primary storage 104 to the secondary storage 106 for restoring whenever needed by the primary storage 104.
At step 206, the method 200 further comprises replicating, by a cascaded data mover 118, the backup data 112 on the secondary storage 106 to a cascaded backup data 114 on a tertiary storage 108. The replicating enables the backup data 112 of the secondary storage 106 and the cascaded backup data 114 of the tertiary storage 108 to be in synchronization, i.e., all the data in the secondary storage 106 is backed up to the tertiary storage 108. The cascaded data mover 118 backs up all the data of the secondary storage 106 to the tertiary storage 108 for recovering whenever needed by the primary storage 104.
At step 208, the method 200 further comprises replicating, by means of creating a bitmap 124 of the block changes between a last snapshot 130 of the primary volume 110 copied to the cascaded backup data 114 and a current snapshot 132 of the primary volume 110 available in the backup data 112. The last snapshot 130 includes the data stored in the blocks of the primary volume 110, at a first time, which is already copied to the cascaded backup data 114 to enable recovery of the data whenever needed. The current snapshot 132 includes the data stored in the blocks of the primary volume 110, at a second time, which is available for copying to the cascaded backup data 114. The last snapshot 130 is taken at a time which is earlier than the time at which the current snapshot 132 is taken. The method 200 comprises creating the bitmap 124
which provides information about the blocks which have changed or updated data from the time at which the last snapshot 130 is taken to the time at which the current snapshot 132 is taken.
At step 210, the method 200 further comprises replicating, by means of copying blocks from the current snapshot 132 to the cascaded backup data 114 according to the bitmap 124. The method 200 comprises, after creating the bitmap 124, copying the blocks having the changed or updated data in the current snapshot 132 to the cascaded backup data 114. Thus, by virtue of copying the current snapshot 132, the tertiary storage 108 is consistent with the secondary storage 106.
At step 212, the method 200 further comprises jumping, by the cascaded data mover 118, to a newer snapshot 134 of the primary volume 110 when the newer snapshot 134 is available in the backup data 112 and a certain percent of the current snapshot 132 is copied to the cascaded backup data 114. The method 200 comprises the primary storage 104 providing the newer snapshot 134 of the primary volume 110 to the secondary storage 106. The newer snapshot 134 includes the data stored in the blocks of the primary volume 110, at a third time. The newer snapshot 134 is taken after the current snapshot 132 is taken. The newer snapshot 134 may be received by the secondary storage 106 when the current snapshot 132 is being copied to the cascaded backup data 114. The method 200 comprises detecting when the newer snapshot 134 is available at the secondary storage 106. This detection may be based on parameters like time, date, name and the like. Further, the method 200 provides instructions to jump to the newer snapshot 134 when a certain percentage of the current snapshot 132 is copied to the cascaded backup data 114. In an example, this percentage may be 10 percent.
At step 214, the method 200 further comprises jumping, by means of creating a difference bitmap 126 of the difference between the current snapshot 132 and the newer snapshot 134. The method 200 comprises creating the difference bitmap 126 which provides information about the blocks which have changed or updated data from the time at which the current snapshot 132 is taken to the time at which the newer snapshot 134 is taken.
At step 216, the method 200 further comprises jumping, by means of merging the difference bitmap 126 with the bitmap 124. After creating the difference bitmap 126, the difference bitmap 126 is merged with the bitmap 124 to form the merged bitmap 128.
At step 218, the method 200 further comprises jumping, by means of copying blocks from the newer snapshot 134 according to the merged bitmap 128. The blocks from the newer snapshot
134 are copied to the cascaded backup data 114. Thus, the blocks of the current snapshot 132 that remained to be copied to the cascaded backup data 114 are now replaced with the blocks having changed or updated data from the newer snapshot 134. Beneficially, in comparison to conventional methods, the method 200 does not run the risk of never finishing the process of copying the data from the secondary storage 106 to the tertiary storage 108, and thus the tertiary storage 108 reaches a consistent state from which restoration can be done.
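Tying steps 206 to 218 together, a hedged end-to-end sketch of the cascaded replication loop could look as follows. It reuses the helpers from the earlier sketches, and the secondary-storage object with its newest_snapshot() method is a hypothetical placeholder, not an interface defined by the disclosure.

```python
def replicate_cascaded(secondary, cascaded_backup, initial_threshold=10):
    """Sketch of the loop behind steps 206-218, under the stated assumptions."""
    current = secondary.newest_snapshot()
    bitmap = create_bitmap(None, current)        # first cycle: all bits ON
    threshold = initial_threshold
    while any(bitmap):
        # step 210: copy a bounded batch so newer snapshots can be noticed
        copy_according_to_bitmap(bitmap, current, cascaded_backup, batch=64)
        newer = secondary.newest_snapshot()
        if newer is not current:
            # steps 212-218: jump only once the threshold has been met
            snapshot, bitmap = maybe_jump(bitmap, current, newer, threshold)
            if snapshot is not current:
                current = snapshot
                threshold = next_threshold(threshold)
    # all bits OFF: a consistent checkpoint now exists on the tertiary
    # storage; the threshold would be reset before the next cycle
    return current
```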
According to an embodiment, the certain percent of the current snapshot 132 is defined by a replication threshold. The replication threshold refers to a parameter that indicates an amount of data that must be copied from the current snapshot 132 before moving to read from the newer snapshot 134. The replication threshold ensures that, in comparison to conventional methods, there is no risk of not finishing the process of copying the data from the secondary storage 106 to the tertiary storage 108.
According to an embodiment, the method 200 further comprises increasing the replication threshold each time when the cascaded data mover 118 jumps to a new snapshot until the replication threshold reaches 100 percent. In an example, an initial value of the replication threshold may be 10 percent and may be increased by 10 percent by the method 200 each time when the cascaded data mover 118 jumps to the new snapshot. In another example, an initial value of the replication threshold may be 10 percent and may be increased by 5 percent each time. In another example, an initial value of the replication threshold may be 15 percent and may be increased by 5 percent each time. The replication threshold is increased until 100 percent is reached to enable the tertiary storage 108 to achieve consistency with the secondary storage 106.
According to an embodiment, the method 200 further comprises decreasing the replication threshold to an initial value after 100 percent of the current snapshot 132 of the primary volume 110 is copied to the cascaded backup data 114. In an example, the method 200 comprises decreasing the replication threshold to an initial value of 10 percent after 100 percent of the current snapshot 132 is copied to the cascaded backup data 114. At 100 percent the tertiary storage 108 achieves a consistency with the secondary storage 106. The replication threshold is decreased after 100 percent is reached to enable copying of data from given newer snapshots after a certain percentage of a given current snapshot is copied. Thus, the tertiary storage 108 has improved consistency with the secondary storage 106.
According to an embodiment, the replication threshold is increased linearly or exponentially. The new percentage of the replication threshold after each jump can be larger than the previous one by a constant amount or by a larger difference. In an example, an initial value of the replication threshold may be 10 percent which may be increased linearly by 10 percent till 100 percent. In another example, an initial value of the replication threshold may be 10 percent which may be increased exponentially by 5 percent, 15 percent, 30 percent and 40 percent.
According to an embodiment, the method 200 further comprises if no snapshot of the primary volume 110 is copied to the cascaded backup data 114, creating a bitmap of the size of the current snapshot 132 of the primary volume 110 available in the backup data 112 to replicate the backup data 112 on the secondary storage 106 to the cascaded backup data 114 on the tertiary storage 108. In such a case, the current snapshot 132 is a first snapshot to be copied to the cascaded backup data 114. Thus, the method 200 comprises creating the bitmap of the size of the current snapshot 132. Thus, the current snapshot 132 is copied to the tertiary storage 108 to enable synchronization of data between the primary storage 104, the secondary storage 106 and the tertiary storage 108.
The method 200 of the present disclosure provides improved data storage from the secondary storage 106 to the tertiary storage 108 while the current snapshot 132 is being copied to the tertiary storage 108. The method 200 ensures that the cascaded data mover 118 does not move to a newer snapshot 134 before completely copying a certain amount of data from the current snapshot 132. In comparison to conventional methods, the method 200 of the present disclosure is not at risk of never finishing a process of copying data to the tertiary storage 108. Thus, the method 200 of the present disclosure reaches a consistent state from which restoration can be done when needed. Further, as the tertiary storage 108 is updated to the newer snapshot 134 only after a certain amount of data from the current snapshot 132 is copied, the risk of data loss seen in conventional approaches is significantly reduced in the method 200 of the present disclosure.
In another aspect, a computer program product is provided comprising a non-transitory computer-readable storage medium having computer program code stored thereon, the computer program code being executable by a processor to execute the method 200. In another aspect, a computer program is provided to execute the method 200. Examples of implementation of the non-transitory computer-readable storage medium include, but are not limited to, Electrically Erasable Programmable Read-Only Memory (EEPROM), Random Access Memory (RAM), Read Only Memory (ROM), Hard Disk Drive (HDD), Flash memory,
a Secure Digital (SD) card, Solid-State Drive (SSD), a computer readable storage medium, and/or CPU cache memory. A computer readable storage medium for providing a non-transient memory may include, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
FIG. 3 is a block diagram of a cascaded backup system that depicts data transfer from a primary storage to a secondary storage and a tertiary storage, in accordance with an embodiment of the present disclosure. With reference to FIG. 3, there is shown a block diagram 300 of a cascaded backup system 302 that includes a secondary protection environment 304, a hypervisor 306, a virtual machine 308, a disk 310, a first snapshot 312A, a second snapshot 314A, a copied first snapshot 312B, a copied second snapshot 314B, a copy 316 and a difference bitmap 318. There is further shown the primary storage 104 and the tertiary storage 108. There is also shown the backup data 112, the cascaded backup data 114, the data mover 116, and the cascaded data mover 118.
The secondary protection environment 304 may also be referred to as the secondary storage 106 of FIG. 1A. The hypervisor 306 includes suitable logic, circuitry, and interfaces that may be configured to run virtual machines such as the virtual machine 308. The disk 310 of the primary storage 104 may also be referred to as the primary volume 110 of FIG. 1A, which is configured to store data received from the virtual machine 308. The first snapshot 312A of the disk 310 may also be referred to as the current snapshot 132 of FIG. 1A. The second snapshot 314A of the disk 310 may also be referred to as the newer snapshot 134 of FIG. 1A. The copied first snapshot 312B is the first snapshot 312A copied to the backup data 112, and the copied second snapshot 314B is the second snapshot 314A copied to the backup data 112. The blocks of the copied first snapshot 312B and the copied second snapshot 314B are stored in the copy 316.
The cascaded backup system 302 is configured to create a bitmap (not shown) of the size of the first snapshot 312A of the disk 310 available in the backup data 112 to replicate the backup data 112 to the cascaded backup data 114 on the tertiary storage 108. Further, when the second snapshot 314A of the disk 310 is available in the backup data 112 the cascaded backup system 302 is configured to jump to the second snapshot 314A when a certain percent of the first snapshot 312A is copied to the cascaded backup data 114. The cascaded backup system 302 is configured to create a difference bitmap 318 of the difference between the first snapshot 312A and the second snapshot 314A. The cascaded backup system 302 is further configured to merge
the difference bitmap 318 with the bitmap (not shown) and copy blocks from the second snapshot 314A according to the merged bitmap.
FIG. 4 is a timing diagram of a cascaded backup system that depicts data transfer from a primary storage to a secondary storage and a tertiary storage, in accordance with an embodiment of the present disclosure. With reference to FIG. 4, there is shown a timing diagram 400 that includes a replication threshold value 402, a defined location 404, a zeroth snapshot 406, a first snapshot 408, a second snapshot 410, a third snapshot 412, a fourth snapshot 414, a zeroth bitmap 416, a first bitmap 418, a second bitmap 420, a third bitmap 422, a fourth bitmap 424, a fifth bitmap 426, a sixth bitmap 428, a seventh bitmap 430, an eighth bitmap 432, a zeroth replication disk 434, a first replication disk 436, a second replication disk 438, and a third replication disk 440. There is further shown the primary storage 104, the secondary storage 106, the tertiary storage 108 and the disk 310. The zeroth replication disk 434, the first replication disk 436, the second replication disk 438, and the third replication disk 440 may also be referred to as the cascaded backup data 114 of FIG. 1A at different stages of data transfer from the secondary storage 106 to the tertiary storage 108.
At time T0, a new snapshot such as a zeroth snapshot 406 is available at the secondary storage 106. Since this is the very first snapshot, a zeroth bitmap 416 of the size of the zeroth snapshot 406 is created with all its bits ON, and copying of the data to the zeroth replication disk 434 is started. Further, for each block copied, the corresponding bit is turned OFF in the zeroth bitmap 416. The replication threshold at this step is set to 20%.
At time T1, a new snapshot such as a first snapshot 408 is available at the secondary storage 106. Since more than 20% of the zeroth snapshot 406 has already been copied to the tertiary storage 108, the present disclosure can jump and read from the first snapshot 408. A first bitmap 418 of the difference between the zeroth snapshot 406 and the first snapshot 408 is created and merged with the zeroth bitmap 416. Now, the new bitmap such as a second bitmap 420 has less than 25% of its bits OFF (because of the merge). Further, the present disclosure starts reading from the first snapshot 408 using the second bitmap 420 and increases the replication threshold to 50%.
At time T2, a new snapshot such as a second snapshot 410 is available. However, only 40% of the first snapshot 408 has been copied from the secondary storage 106 to the first replication disk 436, so the present disclosure cannot yet move to read from the second snapshot 410. Thus, the present disclosure continues to read from the first snapshot 408. The third bitmap 422 has 40% of its bits OFF.
At time T3, a new snapshot such as a third snapshot 412 is available. However, the present disclosure has not yet copied more than 50% from the secondary storage 106 to the second replication disk 438, so it continues to read from the first snapshot 408. The fourth bitmap 424 has 48% of its bits OFF.
At time T4, 50% of the first snapshot 408 is reached, so the present disclosure can jump to the third snapshot 412. Further, a bitmap such as the fifth bitmap 426 of the difference between the first snapshot 408 and the third snapshot 412 is created and merged into the current bitmap such as the fourth bitmap 424. Now the new bitmap such as the sixth bitmap 428 has less than 50% of its bits OFF (because of the merge). The replication threshold is now increased to 100%.
At time T5, a new snapshot such as a fourth snapshot 414 is available. This snapshot is ignored for now, as 100% of the third snapshot 412 has not yet been copied. The seventh bitmap 430 has 90% of its bits OFF.
At time T6, 100% of the third snapshot 412 has been copied to the third replication disk 440, and a consistent checkpoint that can be restored when needed is now created on the tertiary storage 108. Further, the replication threshold is decreased back to its initial value of 20%. Further, a bitmap such as the eighth bitmap 432 of the difference between the third snapshot 412 and the latest snapshot, such as the fourth snapshot 414, is created, and data is copied from the secondary storage 106 to the third replication disk 440 according to this eighth bitmap 432.
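Purely as a worked check of the walkthrough above, the decisions at T1 through T6 can be replayed with the percentages quoted in the text. The tuples below are (time, replication threshold, percent of the snapshot copied); the 21% at T1 is an assumed stand-in for the "more than 20%" stated above, not a figure from the disclosure.

```python
# Replay of the FIG. 4 schedule: thresholds 20 -> 50 -> 100, reset to 20 at T6.
events = [
    ("T1", 20, 21),    # over the 20% threshold: jump to the first snapshot 408
    ("T2", 50, 40),    # below 50%: keep reading the first snapshot 408
    ("T3", 50, 48),    # still below 50%: keep reading the first snapshot 408
    ("T4", 50, 50),    # threshold met: jump to the third snapshot 412
    ("T5", 100, 90),   # fourth snapshot 414 ignored until 100% is copied
    ("T6", 100, 100),  # consistent checkpoint; threshold resets to 20%
]
for time, threshold, copied in events:
    print(time, "advance" if copied >= threshold else "stay")
```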
Modifications to embodiments of the present disclosure described in the foregoing are possible without departing from the scope of the present disclosure as defined by the accompanying claims. Expressions such as "including", "comprising", "incorporating", "have", "is" used to describe and claim the present disclosure are intended to be construed in a non-exclusive manner, namely allowing for items, components or elements not explicitly described also to be present. Reference to the singular is also to be construed to relate to the plural. The word "exemplary" is used herein to mean "serving as an example, instance or illustration". Any embodiment described as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments and/or to exclude the incorporation of features from other embodiments. The word "optionally" is used herein to mean "is provided in some embodiments and not provided in other embodiments". It is appreciated that certain features of
the present disclosure, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the present disclosure, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable combination or as suitable in any other described embodiment of the disclosure.
Claims
1. A cascaded data mover (118) for a cascaded backup system (102, 302), the cascaded backup system (102, 302) comprising a primary storage (104) with a primary volume (110) for storing data of a computing system, a secondary storage (106) for storing a backup data (112) of the primary volume (110) obtained from the primary storage (104), a tertiary storage (108) for storing a cascaded backup data (114) of the primary volume (110) obtained from the secondary storage (106), and a data mover (116) for replicating the primary volume (110) to the backup data (112) on the secondary storage (106), the cascaded data mover (118) comprising:

a replication unit (120) configured to replicate the backup data (112) on the secondary storage (106) to the cascaded backup data (114) on the tertiary storage (108), by means of: creating a bitmap (124) of the block changes between a last snapshot (130) of the primary volume (110) copied to the cascaded backup data (114) and a current snapshot (132) of the primary volume (110) available in the backup data (112), and copying blocks from the current snapshot (132) to the cascaded backup data (114) according to the bitmap (124); and

a control unit (122) configured to determine when a newer snapshot (134) of the primary volume (110) is available in the backup data (112) and to instruct the replication unit (120) to jump to the newer snapshot (134) when a certain percent of the current snapshot (132) is copied to the cascaded backup data (114), by means of: creating a difference bitmap (126) of the difference between the current snapshot (132) and the newer snapshot (134), merging the difference bitmap (126) with the bitmap (124), and copying blocks from the newer snapshot (134) according to the merged bitmap (128).
2. The cascaded data mover (118) of claim 1, wherein the certain percent of the current snapshot (132) is defined by a replication threshold.
3. The cascaded data mover (118) of claim 2, wherein the replication threshold is increased each time the cascaded data mover (118) jumps to a new snapshot, until the replication threshold reaches 100 percent.
4. The cascaded data mover (118) of claim 3, wherein the replication threshold is decreased to an initial value after 100 percent of the current snapshot (132) of the primary volume (110) is copied to the cascaded backup data (114).
5. The cascaded data mover (118) of claim 3 or 4, wherein the replication threshold is increased linearly or exponentially.
6. The cascaded data mover (118) of any of claims 1 to 5, wherein if no snapshot of the primary volume (110) is copied to the cascaded backup data (114), the replication unit (120) is configured to create a bitmap of the size of the current snapshot (132) of the primary volume (110) available in the backup data (112) to replicate the backup data (112) on the secondary storage (106) to the cascaded backup data (114) on the tertiary storage (108).
7. A method (200) of cascaded backup, comprising:

storing data of a computing system in a primary volume (110) on a primary storage (104);

replicating, by a data mover (116), the primary volume (110) to a backup data (112) on a secondary storage (106);

replicating, by a cascaded data mover (118), the backup data (112) on the secondary storage (106) to a cascaded backup data (114) on a tertiary storage (108), by means of: creating a bitmap (124) of the block changes between a last snapshot (130) of the primary volume (110) copied to the cascaded backup data (114) and a current snapshot (132) of the primary volume (110) available in the backup data (112), and copying blocks from the current snapshot (132) to the cascaded backup data (114) according to the bitmap (124); and

jumping, by the cascaded data mover (118), to a newer snapshot (134) of the primary volume (110) when the newer snapshot (134) is available in the backup data (112) and a certain percent of the current snapshot (132) is copied to the cascaded backup data (114), by means of: creating a difference bitmap (126) of the difference between the current snapshot (132) and the newer snapshot (134), merging the difference bitmap (126) with the bitmap (124), and copying blocks from the newer snapshot (134) according to the merged bitmap (128).
8. The method (200) of claim 7, wherein the certain percent of the current snapshot (132) is defined by a replication threshold.
9. The method (200) of claim 8, further comprising increasing the replication threshold each time the cascaded data mover (118) jumps to a new snapshot, until the replication threshold reaches 100 percent.
10. The method (200) of claim 9, further comprising decreasing the replication threshold to an initial value after 100 percent of the current snapshot (132) of the primary volume (110) is copied to the cascaded backup data (114).
11. The method (200) of claim 9 or 10, wherein the replication threshold is increased linearly or exponentially.
12. The method (200) of any of claims 7 to 11, comprising, if no snapshot of the primary volume (110) is copied to the cascaded backup data (114), creating a bitmap of the size of the current snapshot (132) of the primary volume (110) available in the backup data (112) to replicate the backup data (112) on the secondary storage (106) to the cascaded backup data (114) on the tertiary storage (108).
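As a companion to claims 6 and 12 above: when no snapshot has been copied yet, the bitmap is simply sized to the current snapshot with every bit ON, so the whole snapshot is treated as pending. A one-function sketch under the same assumptions as the earlier examples (the 4 KiB block size is illustrative, not a value from the disclosure):

```python
def initial_bitmap(snapshot_size_bytes, block_size_bytes=4096):
    """Cold start per claims 6 and 12: every block still needs copying."""
    n_blocks = -(-snapshot_size_bytes // block_size_bytes)  # ceiling division
    return [True] * n_blocks
```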
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/EP2021/065399 WO2022258163A1 (en) | 2021-06-09 | 2021-06-09 | Cascaded data mover for cascaded backup system and method of cascaded backup |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022258163A1 (en) | 2022-12-15 |
Family
ID: 76444401
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2021/065399 WO2022258163A1 (en) | 2021-06-09 | 2021-06-09 | Cascaded data mover for cascaded backup system and method of cascaded backup |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2022258163A1 (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040030846A1 (en) * | 2002-08-06 | 2004-02-12 | Philippe Armangau | Data storage system having meta bit maps for indicating whether data blocks are invalid in snapshot copies |
US20070150677A1 (en) * | 2005-12-28 | 2007-06-28 | Yukiko Homma | Storage system and snapshot management method |
US20180046553A1 (en) * | 2016-08-15 | 2018-02-15 | Fujitsu Limited | Storage control device and storage system |
US10083087B1 (en) * | 2017-07-14 | 2018-09-25 | International Business Machines Corporation | Managing backup copies in cascaded data volumes |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
US11461202B2 (en) | Remote data replication method and system | |
US8719497B1 (en) | Using device spoofing to improve recovery time in a continuous data protection environment | |
US9286052B1 (en) | Upgrading software on a pair of nodes in a clustered environment | |
US20150046667A1 (en) | Synchronization for initialization of a remote mirror storage facility | |
US10678663B1 (en) | Synchronizing storage devices outside of disabled write windows | |
EP3361383B1 (en) | Modifying membership of replication groups via journal operations | |
US10860445B2 (en) | Methods and devices for multi-level data protection in a storage system | |
US7979651B1 (en) | Method, system, and computer readable medium for asynchronously processing write operations for a data storage volume having a copy-on-write snapshot | |
US10049020B2 (en) | Point in time recovery on a database | |
US10884871B2 (en) | Systems and methods for copying an operating source volume | |
US20190317872A1 (en) | Database cluster architecture based on dual port solid state disk | |
US20190227710A1 (en) | Incremental data restoration method and apparatus | |
CN113254048B (en) | Method, device and equipment for updating boot program and computer readable medium | |
TW202133016A (en) | Resilient software updates in secure storage devices | |
US8250036B2 (en) | Methods of consistent data protection for multi-server applications | |
US20160328165A1 (en) | Detecting modifications to a storage that occur in an alternate operating environment | |
US20240045772A1 (en) | Continuous data protection unit, recovery unit for data protection and method thereof | |
CN113986115B (en) | Method, electronic device and computer program product for copying data | |
KR100515890B1 (en) | Method of efficiently recovering database | |
US10078558B2 (en) | Database system control method and database system | |
US20130282975A1 (en) | Systems and methods for backing up storage volumes in a storage system | |
US10296517B1 (en) | Taking a back-up software agnostic consistent backup during asynchronous replication | |
CN117785546A (en) | Database backup method, system and computing device cluster | |
WO2022258163A1 (en) | Cascaded data mover for cascaded backup system and method of cascaded backup | |
EP2368187B1 (en) | Replicated file system for electronic devices |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21732242; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21732242; Country of ref document: EP; Kind code of ref document: A1 |