US20160055062A1 - Systems and Methods for Maintaining a Virtual Failover Volume of a Target Computing System - Google Patents
- Publication number
- US20160055062A1 (application US 14/929,336)
- Authority
- US
- United States
- Prior art keywords
- file
- computing system
- mirror
- virtual
- data blocks
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06F11/1417—Boot up procedures
- G06F11/2074—Asynchronous techniques (mirroring using a plurality of controllers)
- G06F11/1448—Management of the data involved in backup or backup restore
- G06F11/1471—Saving, restoring, recovering or retrying involving logging of persistent data for recovery
- G06F11/1658—Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit
- G06F2201/815—Virtual
- G06F2201/84—Using snapshots, i.e. a logical point-in-time copy of the data
- G06F2201/85—Active fault masking without idle spares
- G06F2201/855—Details of asynchronous mirroring using a journal to transfer not-yet-mirrored changes
Definitions
- the present invention relates generally to systems and methods for maintaining a virtual failover volume of a target computing system, and more specifically, but not by way of limitation, to systems and methods for maintaining a virtual failover volume of a target computing system that may be utilized by a virtual machine to create a virtual failover computing system that approximates the configuration of the target computing system, upon the occurrence of a failover event.
- the systems and methods provided herein may be adapted to maintain a “ready to execute” virtual failover volume of a target computing system.
- the virtual failover volume may be executed by a virtual machine to assume the functionality of the target computing system upon the occurrence of a failover event.
- the systems and methods may maintain the virtual failover volume in a “ready to execute” state by periodically revising a mirror of the target computing system and storing the periodically revised mirror in the virtual failover volume.
- the ability of the systems and methods to periodically revise the mirror of the target computing system ensures that upon the occurrence of a failover event, a virtual machine may execute the periodically revised mirror to create a virtual failover computing system that may assume the configuration of the target computing system without substantial delay.
- the present invention provides for a method for maintaining a virtual failover volume of a target computing system that includes: (a) periodically revising a mirror of the target computing system, according to a predetermined backup schedule, the mirror being stored on the virtual failover volume resident on an appliance that is operatively associated with the target computing system, by: (i) periodically comparing the mirror to a configuration of the target computing system to determine changed data blocks relative to the mirror; (ii) storing the changed data blocks as one or more differential files in the virtual failover volume, the one or more differential files being stored separately from the mirror; and (iii) incorporating the changed data blocks into the mirror; (b) upon the occurrence of a failover event, creating a bootable image file from at least one of the mirror and one or more differential files; and (c) booting the bootable image file via a virtual machine on the appliance to create a virtual failover computing system that substantially corresponds to the target computing system at an arbitrary point in time.
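The three-step revision loop in the method claim above can be sketched as a simplified block-level simulation. The dictionaries, function name, and block contents below are illustrative assumptions, not the patented implementation:

```python
# Simplified simulation of the claimed maintenance cycle: compare the
# target's blocks to the mirror, store changed blocks as a differential
# file, then incorporate them into the mirror. Block stores are modeled
# as dicts mapping block index -> bytes. All names are illustrative.

def revise_mirror(mirror, target, differential_files):
    """One backup interval: detect changes, record a differential, merge."""
    # (i) compare the mirror to the target to find changed data blocks
    changed = {i: data for i, data in target.items() if mirror.get(i) != data}
    if changed:
        # (ii) store the changed blocks as a differential file,
        # kept separately from the mirror itself
        differential_files.append(dict(changed))
        # (iii) incorporate the changed blocks into the mirror
        mirror.update(changed)
    return changed

mirror = {0: b"boot", 1: b"data"}
target = {0: b"boot", 1: b"data2", 2: b"new"}
diffs = []
revise_mirror(mirror, target, diffs)
assert mirror == target                      # mirror now matches the target
assert diffs == [{1: b"data2", 2: b"new"}]   # changes retained separately
```

A second pass with no changes on the target produces no new differential file, which is consistent with the schedule-driven behavior described in the claim.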
- systems for maintaining a virtual failover volume of a target computing system may include: (a) a memory for storing computer readable instructions for maintaining a virtual failover volume of a file structure of a target computing system; and (b) a processor configured to execute the instructions stored in the memory to: (i) periodically revise a mirror of the target computing system, according to a predetermined backup schedule, the mirror being stored on the virtual failover volume resident on an appliance that is operatively associated with the target computing system, by: periodically comparing the mirror to a configuration of the target computing system to determine changed data blocks relative to the mirror; storing the changed data blocks as one or more differential files in the virtual failover volume, the one or more differential files being stored separately from the mirror; and incorporating the changed data blocks into the mirror; (ii) upon the occurrence of a failover event, create a bootable image file from at least one of the mirror and one or more differential files; and (iii) boot the bootable image file via a virtual machine on the appliance to create a virtual failover computing system that substantially corresponds to the target computing system at an arbitrary point in time.
- the present technology may be directed to non-transitory computer readable storage mediums.
- the storage mediums may each have a computer program embodied thereon, the computer program executable by a processor in a computing system to perform a method for maintaining a virtual failover volume of a target computing system that includes: (a) periodically revising a mirror of the target computing system, according to a predetermined backup schedule, the mirror being stored on the virtual failover volume resident on an appliance that is operatively associated with the target computing system, by: (i) periodically comparing the mirror to a configuration of the target computing system to determine changed data blocks relative to the mirror; (ii) storing the changed data blocks as one or more differential files in the virtual failover volume, the one or more differential files being stored separately from the mirror; and (iii) incorporating the changed data blocks into the mirror; (b) upon the occurrence of a failover event, creating a bootable image file from at least one of the mirror and one or more differential files; and (c) booting the bootable image file via a virtual machine on the appliance to create a virtual failover computing system that substantially corresponds to the target computing system at an arbitrary point in time.
- FIG. 1A is a schematic diagram of an exemplary environment for practicing aspects of the present technology.
- FIG. 1B is a diagrammatical representation of copy-on-write operations performed on a virtual failover volume.
- FIG. 2 is a block diagram of a virtual failover application.
- FIG. 3 is a diagrammatical representation of the desparsification and resparsification of the virtual failover volume.
- FIG. 5 illustrates an exemplary computing system that may be used to implement embodiments of the present technology.
- Virtual failover volumes may often be utilized as redundancy mechanisms for backing up one or more target computing systems in case of failover events (e.g., minor or major failures or abnormal terminations of the target computing systems).
- the virtual failover volume may include an approximate copy of a configuration of the target computing system.
- the configuration of the target computing system may include files stored on one or more hard drives, along with configuration information of the target computing system such as Internet protocol (IP) addresses, media access control (MAC) addresses, and the like.
- the configuration of the target computing system may additionally include other types of data that may be utilized by a virtual machine to create a virtual failover computing system that closely approximates the configuration of the target computing system.
- the configuration of the target computing system may be transferred to a virtual failover volume according to a backup schedule.
- methods for backing up the target computing system may include capturing a mirror (also known as a snapshot) of the target computing system.
- the systems and methods may capture differential files indicative of changes to the target computing system since the creation of the snapshot, or since the creation of a previous differential file.
- the differential files may be utilized to update or “revise” the mirror.
- differential files may also be known as incremental files, delta files, delta increments, differential delta increments, reverse delta increments, and other permutations of the same.
- the systems and methods may capture the mirror of the target computing system and store the data blocks of the mirror in a virtual failover volume as a bootable image file by creating a substantially identical copy of the file structure of the target computing system at a given point in time.
- systems and methods of the present technology may utilize a virtual failover volume formatted with the New Technology File System (NTFS).
- the systems and methods may be adapted to modify the allocation strategy utilized by the NTFS file structure to more efficiently utilize the virtual storage volume.
- FIG. 1A includes a schematic diagram of an exemplary environment 100 for practicing the present invention.
- Environment 100 includes a plurality of target computing systems 105 that may each be operatively connected to an appliance 110 , hereinafter referred to as “appliance 110 .”
- Each of the target computing systems 105 may include a configuration that includes one or more target storage mediums 120 such as hard drives, along with additional operating data.
- the target computing system 105 and the appliance 110 may be operatively connected via a network 115, which may include an encrypted VPN tunnel, a LAN, a WAN, or any other commonly utilized network connection that would be known to one of ordinary skill in the art with the present disclosure before them.
- each appliance 110 may be associated with a remote storage medium 125 that facilitates long-term storage of at least a portion of the data (e.g., differential files) from the appliances 110 in one or more virtual failover volumes 130 .
- the appliance 110 provides local backup services for maintaining a virtual failover volume of the target computing system 105 associated therewith. That is, the appliance 110 may capture a mirror indicative of the target computing system 105 (e.g., storage mediums, configuration information, etc.) and periodically capture differential files indicative of changes to the target computing system 105 relative to the mirror. Upon the occurrence of a failover event (e.g., full or partial failure or malfunction of the target computing system), the appliance 110 may boot the virtual failover volume in a virtual machine as a virtual failover computing system that approximates the target computing system 105 at an arbitrary point in time.
- the appliance 110 may include computer readable instructions that, when executed by a processor of the appliance 110 , are adapted to maintain a virtual failover volume of the target computing system 105 associated therewith.
- both the target computing system 105 and the appliance 110 may be generally referred to as “a computing system” such as a computing system 500 as disclosed with respect to FIG. 5 .
- the appliance 110 may be referred to as a particular purpose computing system adapted to maintain a virtual failover volume and execute the virtual failover volume utilizing a virtual machine to create a virtual failover computing system that assumes the configuration of the target computing system 105 .
- in FIG. 2 , a schematic diagram is shown of an exemplary embodiment of the computer readable instructions, which in some embodiments includes an application having one or more modules, engines, and the like.
- the computer readable instructions are hereinafter referred to as a virtual failover application 200 or “application 200 .”
- the application 200 may generally include a disk maintenance module 205 , an obtain mirror module 210 , an analysis module 215 , a revise mirror module 220 , a render mirror module 225 , a resparsification module 230 , and a virtual machine 235 . It is noteworthy that the application 200 may be composed of more or fewer modules and engines (or combinations of the same) and still fall within the scope of the present technology.
- the virtual failover volume 130 may include a sparse file.
- a sparse file may include a sparse file structure that is adapted to hold, for example, two terabytes worth of data.
- the rest of the data blocks of the virtual failover volume 130 may be empty or “free,” in that they include no actual data other than metadata that may inform the NTFS file system that the blocks are available for writing.
- the NTFS file system may transparently convert metadata representing empty blocks into free blocks filled with zero bytes at runtime.
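The zero-filled behavior of sparse regions can be observed with an ordinary sparse file on most Unix-like file systems. This is a general illustration of sparse-file semantics, not NTFS-specific; the file name and sizes are arbitrary:

```python
import os
import tempfile

# Create a sparse file: ftruncate() extends the logical size without
# writing data blocks; reads of the unwritten region return zero bytes,
# analogous to how NTFS presents free blocks as zero-filled at runtime.
fd, path = tempfile.mkstemp()
try:
    os.ftruncate(fd, 1024 * 1024)          # 1 MiB logical size, no data written
    with open(path, "rb") as f:
        f.seek(512 * 1024)
        chunk = f.read(4096)
    assert os.path.getsize(path) == 1024 * 1024
    assert chunk == b"\x00" * 4096         # holes read back as zero bytes
finally:
    os.close(fd)
    os.remove(path)
```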
- the virtual failover volume 130 may include additional storage space for one or more differential files in a differential block store 140 .
- the differential block store 140 may include differential files 140 b, 140 d, and 140 f that are indicative of changes to one or more files of the target computing system 105 relative to the backing store 135.
- the differential block store 140 may be stored separately from the backing store 135 on the virtual failover volume 130 , along with sufficient working space to accommodate a copy of the set of differential files created during subsequent backups of the target computing system 105 .
- the virtual failover volume 130 may also include additional operating space (not shown) for the virtual machine 235 to operate at a reasonable level (e.g., files created or modified by the virtual failover computing system) for a given period of time, which in some cases is approximately one month.
- the analysis module 215 may be adapted to utilize a copy on write functionality to store differential files separately from the backing store 135 .
- An exemplary “write” operation 145 illustrates a differential file 140 f being written into the differential block store 140 .
- changed data blocks included in the one or more differential files may be incorporated into the backing store 135 via the revise mirror module 220 , as will be discussed in greater detail below.
- the application 200 may read (rather than directly open) data blocks from the backing store 135 and the one or more differential files independently from one another, utilizing a copy on write functionality. Utilization of the copy on write functionality may prevent changes to the backing store 135 that may occur if the backing store 135 is opened by the NTFS file system. It is noteworthy that directly opening the backing store 135 may modify the backing store 135 and compromise the integrity of the backing store 135 .
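The copy-on-write behavior described above can be modeled as a thin overlay: writes are diverted into the differential store, and reads consult the differential store before falling back to the untouched backing store. A minimal sketch with hypothetical names:

```python
class CopyOnWriteVolume:
    """Overlay a differential block store on a read-only backing store."""

    def __init__(self, backing_store):
        self.backing = backing_store      # never modified after creation
        self.diffs = {}                   # block index -> changed bytes

    def write_block(self, index, data):
        # Writes are diverted into the differential store (cf. "write"
        # operation 145), leaving the backing store's integrity intact.
        self.diffs[index] = data

    def read_block(self, index):
        # Reads prefer the differential store, then the backing store,
        # without ever opening the backing store for modification.
        if index in self.diffs:
            return self.diffs[index]
        return self.backing.get(index, b"\x00")  # sparse block reads as zero

backing = {0: b"A", 1: b"B"}
vol = CopyOnWriteVolume(backing)
vol.write_block(1, b"B2")
assert vol.read_block(0) == b"A"       # unchanged block from backing store
assert vol.read_block(1) == b"B2"      # changed block from differential store
assert backing == {0: b"A", 1: b"B"}   # backing store never touched
```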
- each of the blocks of the backing store 135 is a “free” or sparse block such that the obtain mirror module 210 may move or “copy” the blocks of data from the target computing system 105 to the sparse blocks of the backing store 135 .
- Exemplary empty or “free” blocks of the backing store 135 are shown as free blocks 150 .
- the backing store 135 may include occupied blocks such as 135 a and 135 e indicative of data blocks copied from the target computing system 105 .
- the obtain mirror module 210 may be executed to copy data blocks from the target computing system 105 into the backing store 135 to occupy at least a portion of the free blocks 150 to create a mirror or “snapshot” of the target computing system 105 .
- the backing store 135 may be stored as a bootable image file, such as a Windows® root file system, that may be executed by the virtual machine 235 .
- the virtual machine 235 may utilize a corresponding Windows® operating system to boot the bootable image file.
- the analysis module 215 may be executed periodically (typically according to a backup schedule) to determine the changed data blocks of the target computing device 105 relative to the data blocks of the backing store 135 .
- the determined changed data blocks may be stored in the differential block store 140 as one or more differential files.
- each execution of the analysis module 215 that determines changed blocks results in the creation of a separate differential file.
- Changed blocks stored in the differential block store 140 that are obtained by the analysis module 215 may be utilized by the revise mirror module 220 to revise the mirror (e.g., backing store 135 ) of the target computing system 105 . It will be understood that the process of revising the mirror may occur according to a predetermined backup schedule.
- the render mirror module 225 may render a bootable image file from one or more mirrors, or from one or more mirrors and one or more differential files, to create a virtual failover computing system that approximates the configuration of the target computing system 105 at an arbitrary point in time.
- the backup methods utilized by the appliance 110 (e.g., the mirror and differential files are stored in a virtual failover volume 130 ) allow for the quick and efficient rendering of bootable disk images.
- bootable disk images may be utilized by the virtual machine 235 to launch a virtual failover computing system that approximates the configuration of the target computing system 105 at an arbitrary point in time without substantial delay caused by copying all of (or even a substantial portion) the backed-up data blocks from an unorganized state to a bootable image file that approximates the root file system of the target computing system upon the occurrence of the failover event.
- the virtual failover volume 130 may be kept in a “ready to execute” format such that upon the occurrence of a failover event, the render mirror module 225 may be executed to render the mirror and the revisable differential file to create a bootable image file that is utilized by the virtual machine 235 to establish a virtual failover computing system that substantially corresponds to the configuration of the target computing system 105 as it existed right before the occurrence of the failover event.
- the virtual machine 235 may utilize data blocks from the differential block store 140 , in addition to data blocks from the backing store 135 .
- the virtual machine 235 may utilize copy on write functionalities to obtain data blocks from the backing store 135 along with data blocks from the differential block store 140 that are situated temporally between the mirror and an arbitrary point in time. The combination of the data blocks allows the virtual machine 235 to recreate the file approximately as it appeared on the target computing system 105 at the arbitrary point in time.
- the virtual machine 235 may recreate a file 160 by utilizing a “read” copy on write functionality to read data blocks 135 a and 135 e from the backing store 135 and differential files 140 d and 140 f from the differential block store 140 .
- the virtual machine 235 assembles the data blocks and differential files to create the file 160 .
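The point-in-time assembly of a file from mirror blocks and temporally ordered differential files can be sketched as follows; the timestamps and block labels (echoing 135 a/e and 140 b/d/f) are illustrative:

```python
# Reconstruct state at an arbitrary point in time by starting from the
# mirror's blocks and applying differential files in temporal order, up
# to the requested timestamp. Later changes override earlier ones.

def reconstruct(mirror_blocks, differentials, point_in_time):
    blocks = dict(mirror_blocks)
    for ts, changed in sorted(differentials, key=lambda d: d[0]):
        if ts <= point_in_time:           # only diffs at or before the cutoff
            blocks.update(changed)
    return blocks

mirror = {"135a": b"header", "135e": b"body"}
diffs = [
    (1, {"140b": b"old"}),
    (2, {"140d": b"patch"}),
    (3, {"140f": b"tail"}),
]
state = reconstruct(mirror, diffs, point_in_time=3)
assert state["135a"] == b"header"   # unchanged mirror block
assert state["140f"] == b"tail"     # latest differential included
assert reconstruct(mirror, diffs, point_in_time=0) == mirror
```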
- the virtual failover computing system may utilize additional configuration details of the target computing system 105 , such as a media access control (MAC) address, an Internet protocol (IP) address, or other suitable information indicative of the location or identification of the target computing system 105 .
- the virtual machine 235 may also update registry entries or perform any other necessary startup operations such that the virtual failover computing system may function substantially similarly to the target computing system 105 .
- the virtual machine 235 may also create, delete, and modify files just as the target computing system 105 would, although changed data blocks indicative of the modified files may be stored in the additional operating space created in the virtual failover volume 130 . Moreover, data blocks may be deleted from the virtual failover volume 130 .
- the virtual failover volume 130 may utilize the NTFS file system.
- allocation strategies may cause the virtual machine 235 to overlook deleted blocks that have not been converted to free blocks by the NTFS file system. For example, modifications to the backing store 135 by the revise mirror module 220 and routine deletion of differential files from the differential block store 140 may result in deleted blocks. It will be understood that a deleted block is a data block that has been marked for deletion by the NTFS file system, but that still retains a portion of the deleted data.
- Allocation strategies of the NTFS file system may cause data blocks that are being written into the virtual failover volume 130 to be written into the next available free block(s), leading to desparsification.
- the resparsification module 230 may be adapted to resparsify the virtual failover volume 130 .
- the NTFS file system may notify the underlying XFS file system of the appliance 110 (which holds the backing store 135 ), to resparsify the one or more deleted blocks, returning them to the sparse state.
- FIG. 3 illustrates the desparsification operation 305 a of a portion of the backing store 300 by the NTFS file system when the NTFS file system attempts to write four data blocks into the backing store 300 .
- the backing store 300 is shown as including occupied blocks 310 a and 310 c, along with deleted blocks 310 b , 310 d, and 310 e, and sparse blocks 310 f - i . It will be understood that the allocation strategy of the NTFS file system begins by selecting the first block 310 a at the beginning of the portion of the backing store 300.
- absent resparsification, the allocation strategy would have selected sparse blocks 310 f - i , thus desparsifying four blocks instead of one, such as block 310 f.
- the resparsification module 230 may be adapted to perform a resparsification operation 305 b on the backing store 300 .
- resparsification module 230 may be adapted to cause the NTFS file system to notify the underlying XFS file system of the appliance 110 (which holds the backing store 135 ), to resparsify the deleted blocks 310 b, 310 d, and 310 e.
- data may be written to the resparsified blocks 310 b, 310 d, 310 e , desparsifying only one data block 310 f.
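The effect illustrated in FIG. 3 can be simulated with a first-fit allocator over block states. This is a model of the described behavior, not NTFS itself; block indices mirror the 310 a-i layout:

```python
# Simulate the NTFS-style allocation strategy from FIG. 3: the allocator
# scans from the start and writes into the first available blocks.
# Block states: "occupied", "deleted" (marked but not freed), "sparse".

def allocate(blocks, count):
    """Write `count` blocks into the first sparse slots; return their indices."""
    written = []
    for i, state in enumerate(blocks):
        if len(written) == count:
            break
        if state == "sparse":             # only sparse blocks are available
            blocks[i] = "occupied"
            written.append(i)
    return written

def resparsify(blocks):
    # The resparsification module returns deleted blocks to the sparse
    # state (in practice, by hole-punching in the underlying XFS file).
    for i, state in enumerate(blocks):
        if state == "deleted":
            blocks[i] = "sparse"

# Indices 0-8 correspond to blocks 310a-310i in FIG. 3.
layout = ["occupied", "deleted", "occupied", "deleted", "deleted",
          "sparse", "sparse", "sparse", "sparse"]

# Without resparsification: deleted blocks are skipped, so all four
# writes consume previously sparse blocks (310f-i).
assert allocate(list(layout), 4) == [5, 6, 7, 8]

# With resparsification: the three former deleted blocks (310b, 310d,
# 310e) are reused and only one sparse block (310f) is desparsified.
blocks = list(layout)
resparsify(blocks)
assert allocate(blocks, 4) == [1, 3, 4, 5]
```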
- resparsification module 230 has been disclosed within the context of maintaining virtual failover volumes of target computing systems, the applicability of resparsifying a virtual volume may extend to other systems and methods that would benefit from the ability of one or more virtual machines to efficiently store data on a virtual volume.
- the application 200 may be adapted to optimize the virtual failover volume 130 by limiting metadata updates to the virtual failover volume 130 by the NTFS file system.
- the backing store 135 may be utilized by the virtual machine 235 as a file (e.g., bootable disk image) that may be stored in an XFS file system on the appliance 110 .
- the XFS file system may commit a metadata change to record an “mtime” (e.g., modification time) of a data block.
- because some virtual machines 235 may utilize a semi-synchronous NTFS file system within the backing store 135 , a very high quantity of 512-byte clusters may be written, each invoking a metadata update to the virtual machine XFS file system.
- these metadata updates cause the virtual machine XFS file system to serialize inputs and outputs behind transactional journal commits, degrading the performance of the virtual machine 235 .
- the virtual machine 235 may be adapted to open files (comprised of data blocks or differential data blocks) using a virtual machine XFS internal kernel call that may open a file by way of a handle.
- the virtual machine 235 may use this method for both backup and restore functionality. Therefore, a file opened utilizing this method may allow the virtual machine 235 to omit “mtime” updates, thereby reducing journal commits and significantly improving write performance.
- the virtual machine 235 may utilize memory-efficient data block locking functionalities for asynchronous input and output actions, because the default locking functionality may be inefficient at memory utilization, especially when locking data blocks during asynchronous input and/or output actions.
- the virtual machines 235 utilized in accordance with the present technology may be adapted to utilize alternate lock management systems which create locks as needed and store the locked data blocks in a ‘splay-tree’ while active.
- the splay tree is a standard data structure that provides the virtual machine 235 with the ability to rapidly look up a node while restructuring the tree so that the root is near recently accessed data blocks.
- the memory footprint of the appliance 110 may be reduced without an associated compromise of lookup speed. It will be understood that large virtual failover volumes may be accessed using this method.
- the virtual machine 235 may be paused or “locked.” Pausing the virtual machine 235 preserves the state of the virtual failover volume 130 . Moreover, the paused virtual failover volume 130 may be copied directly to the repaired target computing system allowing for a virtual to physical conversion of the virtual failover volume 130 to the target storage medium 120 of the repaired target computing device.
- the bootable image file created from the virtual failover volume 130 may be discarded and the virtual failover volume 130 may be returned to a data state that approximates the data state of the virtual failover volume 130 before the bootable image file was created by the obtain mirror module 210 .
- the virtual failover volume 130 may then be reutilized with the repaired target computing system.
- the method 400 may include the step 405 of allocating a virtual failover volume having a two terabyte size, and a step 410 of formatting the virtual failover volume utilizing an NTFS file structure.
- the method 400 may include the step 415 of periodically obtaining a mirror of a target computing system on a virtual failover volume as a bootable image file.
- the step 415 may include periodically comparing the mirror to a configuration of the target computing system to determine changed data blocks relative to the mirror, and storing the changed data blocks as one or more differential files in the virtual failover volume, the one or more differential files being stored separately from the mirror. It will be understood that the periodically obtained mirrors may be stored on a virtual volume as a Windows® root file system.
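Step 415's comparison can be sketched as a block-map diff. The dict-of-blocks representation and the function name are illustrative simplifications, not the patent's data layout:

```python
def determine_changed_blocks(mirror, current_config):
    """Return only the data blocks of the target configuration that
    differ from the mirror; this is what would be stored as a
    differential file, separate from the mirror itself.
    Block maps are dict[int, bytes] for illustration."""
    return {n: block for n, block in current_config.items()
            if mirror.get(n) != block}
```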
- the method 400 may include the step 420 of receiving information indicative of a failover event (e.g., failure of the target computing system). Upon receiving such information, the method 400 may include the step 425 of rendering a bootable image file from the mirror that has been periodically revised.
- the method 400 may include the step 430 of booting the bootable image file via a virtual machine to create a virtual failover computing system. It will be understood that the configuration of the virtual failover computing system may closely approximate the configuration of the target computing system at the failover event.
- the method 400 may include an optional step 435 of rendering a bootable image file that approximates the configuration of the target computing system at an arbitrary point in time utilizing one or more mirrors and one or more differential files, rather than only utilizing the mirror.
- the step 435 may include walking the mirror back in time utilizing the one or more differential files to recreate the configuration of the target computing system as it was at the arbitrary point in time.
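The walk back in time of step 435 can be sketched with reverse delta increments (one of the differential-file forms named elsewhere in this description), each recording the prior contents of the blocks a later backup changed. The names and dict-based block maps are illustrative assumptions:

```python
def walk_mirror_back(mirror, reverse_deltas, target_time):
    """Recreate the block map as of `target_time` by applying reverse
    delta increments, newest first, on top of a copy of the current
    mirror. Each delta is (timestamp, {block_no: prior bytes})."""
    image = dict(mirror)  # the mirror itself is never modified
    for ts, prior_blocks in sorted(reverse_deltas,
                                   key=lambda d: d[0], reverse=True):
        if ts <= target_time:
            break                      # older deltas are not needed
        image.update(prior_blocks)     # restore the earlier contents
    return image
```

The further back `target_time` lies, the more deltas must be applied, which mirrors the observation above that launching an older point-in-time image takes longer.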
- FIG. 5 illustrates an exemplary computing system 500 that may be used to implement an embodiment of the present technology.
- the system 500 of FIG. 5 may be implemented in the contexts of the target computing devices 105 and the appliance 110 .
- the computing system 500 of FIG. 5 includes one or more processors 510 and main memory 520 .
- Main memory 520 stores, in part, instructions and data for execution by processor 510 .
- Main memory 520 may store the executable code when in operation.
- the system 500 of FIG. 5 further includes a mass storage device 530 , portable storage medium drive(s) 540 , output devices 550 , user input devices 560 , a graphics display 570 , and peripheral devices 580 .
- The components shown in FIG. 5 are depicted as being connected via a single bus 590 .
- the components may be connected through one or more data transport means.
- Processor unit 510 and main memory 520 may be connected via a local microprocessor bus, and the mass storage device 530 , peripheral device(s) 580 , portable storage device 540 , and display system 570 may be connected via one or more input/output (I/O) buses.
- Mass storage device 530 , which may be implemented with a magnetic disk drive or an optical disk drive, is a non-volatile storage device for storing data and instructions for use by processor unit 510 . Mass storage device 530 may store the system software for implementing embodiments of the present invention for purposes of loading that software into main memory 520 .
- Portable storage device 540 operates in conjunction with a portable non-volatile storage medium, such as a floppy disk, compact disk, digital video disc, or USB storage device, to input and output data and code to and from the computer system 500 of FIG. 5 .
- the system software for implementing embodiments of the present invention may be stored on such a portable medium and input to the computer system 500 via the portable storage device 540 .
- Input devices 560 provide a portion of a user interface.
- Input devices 560 may include an alphanumeric keypad, such as a keyboard, for inputting alpha-numeric and other information, or a pointing device, such as a mouse, a trackball, stylus, or cursor direction keys.
- the system 500 as shown in FIG. 5 includes output devices 550 . Suitable output devices include speakers, printers, network interfaces, and monitors.
- Display system 570 may include a liquid crystal display (LCD) or other suitable display device.
- Display system 570 receives textual and graphical information, and processes the information for output to the display device.
- Peripherals 580 may include any type of computer support device to add additional functionality to the computer system.
- Peripheral device(s) 580 may include a modem or a router.
- the components provided in the computer system 500 of FIG. 5 are those typically found in computer systems that may be suitable for use with embodiments of the present invention and are intended to represent a broad category of such computer components that are well known in the art.
- the computer system 500 of FIG. 5 may be a personal computer, hand held computing system, telephone, mobile computing system, workstation, server, minicomputer, mainframe computer, or any other computing system.
- the computer may also include different bus configurations, networked platforms, multi-processor platforms, etc.
- Various operating systems may be used including Unix, Linux, Windows, Macintosh OS, Palm OS, Android, iPhone OS and other suitable operating systems.
- Computer-readable storage media refer to any medium or media that participate in providing instructions to a central processing unit (CPU), a processor, a microcontroller, or the like. Such media may take forms including, but not limited to, non-volatile and volatile media such as optical or magnetic disks and dynamic memory, respectively. Common forms of computer-readable storage media include a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic storage medium, a CD-ROM disk, digital video disk (DVD), any other optical storage medium, RAM, PROM, EPROM, a FLASH EPROM, any other memory chip or cartridge.
Description
- This application is a continuation of, and claims the priority benefit of, U.S. patent application Ser. No. 13/030,073, entitled “SYSTEMS AND METHODS FOR MAINTAINING A VIRTUAL FAILOVER VOLUME OF A TARGET COMPUTING SYSTEM,” filed on Feb. 17, 2011, which in turn relates to U.S. patent application Ser. No. 12/895,275, entitled “SYSTEMS AND METHODS FOR RESTORING A FILE,” filed on Sep. 30, 2010. The above disclosures are hereby incorporated by reference in their entirety, including all references cited therein.
- The present invention relates generally to systems and methods for maintaining a virtual failover volume of a target computing system, and more specifically, but not by way of limitation, to systems and methods for maintaining a virtual failover volume of a target computing system that may be utilized by a virtual machine to create a virtual failover computing system that approximates the configuration of the target computing system, upon the occurrence of a failover event.
- Generally speaking, the systems and methods provided herein may be adapted to maintain a “ready to execute” virtual failover volume of a target computing system. The virtual failover volume may be executed by a virtual machine to assume the functionality of the target computing system upon the occurrence of a failover event.
- The systems and methods may maintain the virtual failover volume in a “ready to execute” state by periodically revising a mirror of the target computing system and storing the periodically revised mirror in the virtual failover volume. The ability of the systems and methods to periodically revise the mirror of the target computing system ensures that upon the occurrence of a failover event, a virtual machine may execute the periodically revised mirror to create a virtual failover computing system that may assume the configuration of the target computing system without substantial delay.
- According to exemplary embodiments, the present invention provides for a method for maintaining a virtual failover volume of a target computing system that includes: (a) periodically revising a mirror of the target computing system, according to a predetermined backup schedule, the mirror being stored on the virtual failover volume resident on an appliance that is operatively associated with the target computing system, by: (i) periodically comparing the mirror to a configuration of the target computing system to determine changed data blocks relative to the mirror; (ii) storing the changed data blocks as one or more differential files in the virtual failover volume, the one or more differential files being stored separately from the mirror; and (iii) incorporating the changed data blocks into the mirror; (b) upon the occurrence of a failover event, creating a bootable image file from at least one of the mirror and one or more differential files; and (c) booting the bootable image file via a virtual machine on the appliance to create a virtual failover computing system that substantially corresponds to the target computing system at an arbitrary point in time.
- According to other embodiments, systems for maintaining a virtual failover volume of a target computing system may include: (a) a memory for storing computer readable instructions for maintaining a virtual failover volume of a file structure of a target computing system; and (b) a processor configured to execute the instructions stored in the memory to: (i) periodically revise a mirror of the target computing system, according to a predetermined backup schedule, the mirror being stored on the virtual failover volume resident on an appliance that is operatively associated with the target computing system, by: periodically compare the mirror to a configuration of the target computing system to determine changed data blocks relative to the mirror; store the changed data blocks as one or more differential files in the virtual failover volume, the one or more differential files being stored separately from the mirror; and incorporate the changed data blocks into the mirror; (ii) upon the occurrence of a failover event, create a bootable image file from at least one of the mirror and one or more differential files; and (iii) boot the bootable image file via a virtual machine on the appliance to create a virtual failover computing system that substantially corresponds to the target computing system at an arbitrary point in time.
- In some embodiments, the present technology may be directed to non-transitory computer readable storage mediums. The storage medium may each have a computer program embodied thereon, the computer program executable by a processor in a computing system to perform a method for maintaining a virtual failover volume of a target computing system that includes: (a) periodically revising a mirror of the target computing system, according to a predetermined backup schedule, the mirror being stored on the virtual failover volume resident on an appliance that is operatively associated with the target computing system, by: (i) periodically comparing the mirror to a configuration of the target computing system to determine changed data blocks relative to the mirror; (ii) storing the changed data blocks as one or more differential files in the virtual failover volume, the one or more differential files being stored separately from the mirror; and (iii) incorporating the changed data blocks into the mirror; (b) upon the occurrence of a failover event, creating a bootable image file from at least one of the mirror and one or more differential files; and (c) booting the bootable image file via a virtual machine on the appliance to create a virtual failover computing system that substantially corresponds to the target computing system at an arbitrary point in time.
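The claimed revision cycle, steps (i) through (iii), can be condensed into one hedged sketch; the block-map representation and function name are illustrative simplifications, not the claimed implementation:

```python
def revise_mirror(mirror, differential_store, target_config):
    """One scheduled revision: (i) compare the mirror to the target's
    configuration, (ii) store the changed blocks as a differential
    file kept separate from the mirror, and (iii) incorporate the
    changed blocks into the mirror."""
    changed = {n: b for n, b in target_config.items()
               if mirror.get(n) != b}
    if changed:
        differential_store.append(dict(changed))  # (ii) separate file
        mirror.update(changed)                    # (iii) fold into mirror
    return changed
```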
- FIG. 1A is a schematic diagram of an exemplary environment for practicing aspects of the present technology.
- FIG. 1B is a diagrammatical representation of copy-on-write operations performed on a virtual failover volume.
- FIG. 2 is a block diagram of a virtual failover application.
- FIG. 3 is a diagrammatical representation of the desparsification and resparsification of the virtual failover volume.
- FIG. 4 is a flowchart of an exemplary method for maintaining a virtual failover volume and launching the virtual failover volume via a virtual machine.
- FIG. 5 illustrates an exemplary computing system that may be used to implement embodiments of the present technology.
- While this invention is susceptible of embodiment in many different forms, there is shown in the drawings and will herein be described in detail several specific embodiments with the understanding that the present disclosure is to be considered as an exemplification of the principles of the invention and is not intended to limit the invention to the embodiments illustrated.
- Virtual failover volumes may often be utilized as redundancy mechanisms for backing up one or more target computing systems in case of failover events (e.g., minor or major failures or abnormal terminations of the target computing systems). The virtual failover volume may include an approximate copy of a configuration of the target computing system. In some embodiments, the configuration of the target computing system may include files stored on one or more hard drives, along with configuration information of the target computing system such as Internet protocol (IP) addresses, media access control (MAC) addresses, and the like. The configuration of the target computing system may additionally include other types of data that may be utilized by a virtual machine to create a virtual failover computing system that closely approximates the configuration of the target computing system.
- When backing up the target computing system, the configuration of the target computing system may be transferred to a virtual failover volume according to a backup schedule.
- According to some embodiments, methods for backing up the target computing system may include capturing a mirror (also known as a snapshot) of the target computing system. To save space on the virtual failover volume, rather than capturing subsequent mirrors, the systems and methods may capture differential files indicative of changes to the target computing system since the creation of the snapshot, or since the creation of a previous differential file. The differential files may be utilized to update or “revise” the mirror.
- It will be understood that because of the relatively small size of differential files relative to the mirror, significant space may be saved on the virtual failover volume relative to capturing multiple mirrors. It is noteworthy that differential files may also be known as incremental files, delta files, delta increments, differential delta increments, reverse delta increments, and other permutations of the same.
- It will be understood that exemplary methods for creating mirrors and differential files of target computing systems are provided in greater detail with regard to U.S. patent application Ser. No. 12/895,275, entitled “SYSTEMS AND METHODS FOR RESTORING A FILE,” filed on Sep. 30, 2010, which is hereby incorporated by reference herein in its entirety, including all references cited therein.
- The systems and methods may capture the mirror of the target computing system and store the data blocks of the mirror in a virtual failover volume as a bootable image file by creating a substantially identical copy of the file structure of the target computing system at a given point in time.
- As stated above, rather than capturing additional mirrors of the target storage volume, the systems and methods may capture one or more differential files indicative of changes to the target computing system at one or more points in time after the rendering of the mirror. These changes may be stored in files separate from the mirror and may be retained on the virtual failover volume for a predetermined period of time. The systems and methods may be able to utilize these differential files to walk backwards in time to recreate a virtual failover computing system indicative of the configuration of the target computing system at an arbitrary point in time in the past. It will be understood that the further back in time the systems and methods must go to recreate a virtual failover computing system of the target computing system, the longer the process of launching the virtual failover computing system becomes.
- If a failover event occurs before the systems and methods have updated the mirror utilizing one or more differential files, the systems and methods may boot the bootable image file and additionally read changed blocks from one or more differential files on the fly by way of a copy on write functionality to create a virtual failover computing system (e.g., rendering a mirror of the target computing system) that approximates the configuration of the target computing system.
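That copy-on-write read path can be sketched as an overlay: each requested block is served from the newest differential file that contains it, falling back to the mirror, and nothing is written to either store. The names and dict-based layout are illustrative assumptions:

```python
def read_blocks_copy_on_write(block_numbers, mirror, differentials):
    """On-the-fly read at failover: for each requested block, prefer
    the most recent differential copy and fall back to the mirror.
    Neither the mirror nor the differential files are modified."""
    overlay = {}
    for diff in differentials:       # applied oldest to newest
        overlay.update(diff)
    return [overlay.get(n, mirror[n]) for n in block_numbers]
```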
- Additionally, because the systems and methods of the present technology may utilize a virtual failover volume formatted with the new technology file system (NTFS), the systems and methods may be adapted to modify the allocation strategy utilized by the NTFS file structure to more efficiently utilize the virtual storage volume.
- Referring now to the drawings,
FIG. 1A includes a schematic diagram of an exemplary environment 100 for practicing the present invention. Environment 100 includes a plurality of target computing systems 105 that may each be operatively connected to an appliance 110, hereinafter referred to as “appliance 110.” Each of the target computing systems 105 may include a configuration that includes one or more target storage mediums 120 such as hard drives, along with additional operating data. It will be understood that in some embodiments, the target computing system 105 and the appliance 110 may be operatively connected via a network 115, which may include an encrypted VPN tunnel, a LAN, a WAN, or any other commonly utilized network connection that would be known to one of ordinary skill in the art with the present disclosure before them. - According to some embodiments, each
appliance 110 may be associated with a remote storage medium 125 that facilitates long-term storage of at least a portion of the data (e.g., differential files) from the appliances 110 in one or more virtual failover volumes 130. - Generally speaking, the
appliance 110 provides local backup services for maintaining a virtual failover volume of the target computing system 105 associated therewith. That is, the appliance 110 may capture a mirror indicative of the target computing system 105 (e.g., storage mediums, configuration information, etc.) and periodically capture differential files indicative of changes to the target computing system 105 relative to the mirror. Upon the occurrence of a failover event (e.g., full or partial failure or malfunction of the target computing system), the appliance 110 may boot the virtual failover volume in a virtual machine as a virtual failover computing system that approximates the target computing system 105 at an arbitrary point in time. - The
appliance 110 may include computer readable instructions that, when executed by a processor of the appliance 110, are adapted to maintain a virtual failover volume of the target computing system 105 associated therewith. - According to some exemplary embodiments, both the
target computing system 105 and the appliance 110 may be generally referred to as “a computing system” such as a computing system 500 as disclosed with respect to FIG. 5. However, it will be understood that the appliance 110 may be referred to as a particular purpose computing system adapted to maintain a virtual failover volume and execute the virtual failover volume utilizing a virtual machine to create a virtual failover computing system that assumes the configuration of the target computing system 105. - Referring now to
FIG. 2, a schematic diagram is shown of an exemplary embodiment of the computer readable instructions, which in some embodiments includes an application having one or more modules, engines, and the like. For purposes of brevity, the computer readable instructions are hereinafter referred to as a virtual failover application 200 or “application 200.” - According to some embodiments, the
application 200 may generally include a disk maintenance module 205, an obtain mirror module 210, an analysis module 215, a revise mirror module 220, a render mirror module 225, a resparsification module 230, and a virtual machine 235. It is noteworthy that the application 200 may be composed of more or fewer modules and engines (or combinations of the same) and still fall within the scope of the present technology. - The
disk maintenance module 205 may be adapted to create a virtual failover volume 130 on the appliance 110. According to some embodiments, the disk maintenance module 205 may allocate two terabytes of space for the virtual failover volume 130 for each drive associated with the target computing system 105. In some applications, the disk maintenance module 205 may be adapted to mount the virtual failover volume 130 and format the virtual failover volume 130 utilizing a new technology file system (NTFS). While the disk maintenance module 205 has been disclosed as allocating and formatting a two terabyte virtual failover volume 130 utilizing an NTFS file system, other sizes and formatting procedures that would be known to one of ordinary skill in the art may likewise be utilized in accordance with the present technology. - In some embodiments, the
virtual failover volume 130 may include a sparse file. Generally speaking, a sparse file may include a sparse file structure that is adapted to hold, for example, two terabytes worth of data. In practice, while two terabytes worth of space has been allocated, only a portion of the virtual failover volume 130 may actually be filled with data blocks. The rest of the data blocks of the virtual failover volume 130 may be empty or “free,” in that they include no actual data other than metadata that may inform the NTFS file system that the blocks are available for writing. At read time, the NTFS file system may transparently convert metadata representing empty blocks into free blocks filled with zero bytes. - Referring now to
FIGS. 1A-B and 2 collectively, according to some embodiments, the virtual failover volume 130 may include a backing store 135 that includes the data blocks copied or moved from the target computing system 105 via the obtain mirror module 210. For example, the backing store 135 may include data blocks such as data block 135 a and data block 135 e. It will be understood that data block 135 a and data block 135 e may correspond to a single file or a plurality of files on the target computing system 105, or may include configuration information (e.g., MAC address, IP address, etc.) indicative of the target computing system 105. - In addition to the
backing store 135, thevirtual failover volume 130 may include additional storage space for one or more differential files in adifferential block store 140. For example, thedifferential block store 140 may includedifferential files target computing system 105 relative to thebacking store 135. - It will be understood that the
differential block store 140 may be stored separately from the backing store 135 on the virtual failover volume 130, along with sufficient working space to accommodate a copy of the set of differential files created during subsequent backups of the target computing system 105. Moreover, the virtual failover volume 130 may also include additional operating space (not shown) for the virtual machine 235 to operate at a reasonable level (e.g., files created or modified by the virtual failover computing system) for a given period of time, which in some cases is approximately one month. - It will be understood that because direct modification of the
backing store 135 via the virtual machine 235 may lead to corruption of the backing store 135, the differential files may be stored separately from the backing store 135 in the differential block store 140. Therefore, the analysis module 215 may be adapted to utilize a copy on write functionality to store differential files separately from the backing store 135. An exemplary “write” operation 145 illustrates a differential file 140 f being written into the differential block store 140. - In some applications, changed data blocks included in the one or more differential files may be incorporated into the
backing store 135 via the revise mirror module 220, as will be discussed in greater detail below. However, it will be understood that once the virtual machine 235 has booted the bootable image file of the virtual failover volume 130, the application 200 may read (rather than directly open) data blocks from the backing store 135 and the one or more differential files independently from one another, utilizing a copy on write functionality. Utilization of the copy on write functionality may prevent changes to the backing store 135 that may occur if the backing store 135 is opened by the NTFS file system. It is noteworthy that directly opening the backing store 135 may modify the backing store 135 and compromise the integrity of the backing store 135. - Upon an initial preparation of the
virtual failover volume 130 by the disk maintenance module 205, each of the blocks of the backing store 135 is a “free” or sparse block such that the obtain mirror module 210 may move or “copy” the blocks of data from the target computing system 105 to the sparse blocks of the backing store 135. Exemplary empty or “free” blocks of the backing store 135 are shown as free blocks 150. Moreover, the backing store 135 may include occupied blocks such as 135 a and 135 e indicative of data blocks copied from the target computing system 105. - As stated above, the obtain
mirror module 210 may be executed to copy data blocks from the target computing system 105 into the backing store 135 to occupy at least a portion of the free blocks 150 to create a mirror or “snapshot” of the target computing system 105. It will be understood that the backing store 135 may be stored as a bootable image file, such as a Windows® root file system, that may be executed by the virtual machine 235. In some embodiments, the virtual machine 235 may utilize a corresponding Windows® operating system to boot the bootable image file. - The
analysis module 215 may be executed periodically (typically according to a backup schedule) to determine the changed data blocks of the target computing device 105 relative to the data blocks of the backing store 135. The determined changed data blocks may be stored in the differential block store 140 as one or more differential files. In some embodiments, each execution of the analysis module 215 that determines changed blocks results in the creation of a separate differential file. - Changed blocks stored in the
differential block store 140 that are obtained by the analysis module 215 may be utilized by the revise mirror module 220 to revise the mirror (e.g., backing store 135) of the target computing system 105. It will be understood that the process of revising the mirror may occur according to a predetermined backup schedule. - Upon the occurrence of a failover event, the render
mirror module 225 may utilize the mirror alone, or the mirror and the revised differential file, to render a bootable image file from one or more mirrors, and/or one or more mirrors and one or more differential files, to create a virtual failover computing system that approximates the configuration of the target computing system 105 at an arbitrary point in time. In contrast to backup methods that store data blocks to a backup storage medium in an unorganized (e.g., not substantially corresponding to a root file system of the target computing system) manner, the backup methods utilized by the appliance 110 (e.g., the mirror and differential files are stored in a virtual failover volume 130) allow for the quick and efficient rendering of bootable disk images. - It will be understood that these bootable disk images may be utilized by the
virtual machine 235 to launch a virtual failover computing system that approximates the configuration of the target computing system 105 at an arbitrary point in time without substantial delay caused by copying all (or even a substantial portion) of the backed-up data blocks from an unorganized state to a bootable image file that approximates the root file system of the target computing system upon the occurrence of the failover event. - According to some embodiments, to facilitate rapid failover to the
virtual machine 235, theapplication 200 may be adapted to utilize a revisable differential file. As such, theanalysis module 215 may be adapted to periodically update a revisable differential file. In some embodiments, theanalysis module 215 may update the revisable differential file by comparing the revisable differential file to the current configuration of the target computing system to determine changed data blocks relative to the revisable differential file. Next, theanalysis module 215 may combine the determined changed data blocks into the revisable differential file to create an updated differential file that takes the place of the revisable differential file. Moreover, rather than discarding the revisable differential file, it may be stored in a differential file archive located on at least one of theremote storage device 125 of thevirtual failover volume 130. - As such, the
virtual failover volume 130 may be kept in a “ready to execute” format such that, upon the occurrence of a failover event, the render mirror module 225 may be executed to render the mirror and the revisable differential file into a bootable image file that is utilized by the virtual machine 235 to establish a virtual failover computing system that substantially corresponds to the configuration of the target computing system 105 as it existed right before the occurrence of the failover event. - During operation of the
virtual machine 235, if thevirtual machine 235 reads a file from thevirtual failover volume 130, thevirtual machine 235 may utilize data blocks from thedifferential block store 140, in addition to data blocks from thebacking store 135. Thevirtual machine 235 may utilize copy on write functionalities to obtain data blocks from thebacking store 135 along with data blocks from thedifferential block store 140 that are situated temporally between the mirror and an arbitrary point in time. The combination of the data blocks allows thevirtual machine 235 to recreate the file approximately as it appeared on thetarget computing system 105 at the arbitrary point in time. - With particular emphasis on
FIG. 1B, in an exemplary operation 155, the virtual machine 235 may recreate a file 160 by utilizing a “read” copy on write functionality to read data blocks 135 a and 135 e from the backing store 135 and differential files from the differential block store 140. The virtual machine 235 assembles the data blocks and differential files to create the file 160. - In addition to launching the
virtual machine 235 to create a virtual failover computing system that approximates the configuration of the target computing system 105, the virtual failover computing system may utilize additional configuration details of the target computing system 105, such as a media access control (MAC) address, an Internet protocol (IP) address, or other suitable information indicative of the location or identification of the target computing system 105. The virtual machine 235 may also update registry entries or perform any other necessary startup operations such that the virtual failover computing system may function substantially similarly to the target computing system 105. - During operation, the
virtual machine 235 may also create, delete, and modify files just as the target computing system 105 would, although changed data blocks indicative of the modified files may be stored in the additional operating space created in the virtual failover volume 130. Moreover, data blocks may be deleted from the virtual failover volume 130. - Because the
virtual failover volume 130 may utilize the NTFS file system, allocation strategies may cause the virtual machine 235 to overlook deleted blocks that have not been converted to free blocks by the NTFS file system. For example, modifications to the backing store 135 by the revise mirror module 220 and routine deletion of differential files from the differential data store 140 may result in deleted blocks. It will be understood that a deleted block is a data block that has been marked for deletion by the NTFS file system, but that still retains a portion of the deleted data. - Allocation strategies of the NTFS file system may cause data blocks that are being written into the
virtual failover volume 130 to be written into the next available free block(s), leading to desparsification. To counteract this desparsification, the resparsification module 230 may be adapted to resparsify the virtual failover volume 130. In some embodiments, the NTFS file system may notify the underlying XFS file system of the appliance 110 (which holds the backing store 135) to resparsify the one or more deleted blocks, returning them to the sparse state. -
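The interplay between first-fit allocation and resparsification can be illustrated with a toy Python model (the function, block states, and nine-block layout are hypothetical illustrations, not the patent's implementation): without resparsification, a four-block write consumes four genuinely sparse blocks, whereas resparsifying the deleted blocks first leaves only one sparse block newly desparsified.

```python
# Toy model (all names hypothetical) of a first-fit allocator writing into a
# volume whose blocks are 'occupied', 'deleted', or 'sparse'. Deleted blocks
# are skipped unless they are first resparsified back to the sparse state.

def allocate(volume, count, resparsify=False):
    """First-fit write of `count` blocks; returns (new_states, newly_desparsified).

    newly_desparsified counts only genuinely sparse blocks that get consumed,
    since reusing a resparsified deleted block reclaims space that was
    already spent.
    """
    states = list(volume)
    reclaimable = {i for i, s in enumerate(states) if s == 'deleted'}
    if resparsify:
        for i in reclaimable:
            states[i] = 'sparse'  # deleted blocks returned to the sparse state
    newly_desparsified = 0
    written = 0
    for i, state in enumerate(states):
        if written == count:
            break
        if state == 'sparse':
            states[i] = 'occupied'
            written += 1
            if i not in reclaimable:
                newly_desparsified += 1  # a previously sparse block consumed
    return states, newly_desparsified

# Nine blocks loosely mimicking a layout like FIG. 3: three deleted blocks,
# four sparse blocks at the end of the region.
layout = ['occupied', 'deleted', 'occupied', 'deleted', 'deleted',
          'sparse', 'sparse', 'sparse', 'sparse']
```

In this sketch, `allocate(layout, 4)` desparsifies four sparse blocks, while `allocate(layout, 4, resparsify=True)` reuses the three resparsified deleted blocks and desparsifies only one.
-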
FIG. 3 illustrates the desparsification operation 305 a of a portion of the backing store 300 by the NTFS file system when the NTFS file system attempts to write four data blocks into the backing store 300. The backing store 300 is shown as including occupied blocks, deleted blocks, and sparse blocks 310 f-i. It will be understood that the allocation strategy of the NTFS file system begins selecting at the first block 310 a at the beginning of the portion of the backing store 300. Without the resparsification of the backing store 300 via the resparsification module 230, the allocation strategy would have selected sparse blocks 310 f-i, thus desparsifying four blocks 310 f-i instead of one, such as block 310 f. - The
resparsification module 230 may be adapted to perform a resparsification operation 305 b on the backing store 300. For example, the resparsification module 230 may be adapted to cause the NTFS file system to notify the underlying XFS file system of the appliance 110 (which holds the backing store 135) to resparsify the deleted blocks, returning them to the sparse state so that the allocation strategy may reuse them along with data block 310 f. - While the
resparsification module 230 has been disclosed within the context of maintaining virtual failover volumes of target computing systems, the applicability of resparsifying a virtual volume may extend to other systems and methods that would benefit from the ability of one or more virtual machines to efficiently store data on a virtual volume. - In additional embodiments, the
application 200 may be adapted to optimize the virtual failover volume 130 by limiting the metadata updates made to the virtual failover volume 130 by the NTFS file system. - The
backing store 135 may be utilized by the virtual machine 235 as a file (e.g., a bootable disk image) that may be stored in an XFS file system on the appliance 110. Whenever data blocks are written to the appliance 110, the XFS file system may commit a metadata change to record an “mtime” (e.g., modification time) of a data block. Moreover, because some virtual machines 235 may utilize a semi-synchronous NTFS file system within the backing store 135, a very high quantity of 512-byte clusters may be written, each invoking a metadata update to the virtual machine XFS file system. These metadata updates cause the virtual machine XFS file system to serialize inputs and outputs behind transactional journal commits, degrading the performance of the virtual machine 235. - To alleviate the ‘mtime’ updates, the
virtual machine 235 may be adapted to open files (comprised of data blocks or differential data blocks) using a virtual machine XFS internal kernel call that may open a file by way of a handle. The virtual machine 235 may use this method for both backup and restore functionality. Therefore, a file opened utilizing this method may allow the virtual machine 235 to omit “mtime” updates, thereby reducing journal commits and significantly improving write performance. - Moreover, the
virtual machine 235 may utilize memory-efficient data block locking functionalities for asynchronous input and output actions. That is, the locking functionality otherwise available to the virtual machine 235 may be inefficient at memory utilization, especially when locking data blocks during asynchronous input and/or output actions. - Therefore, the
virtual machines 235 utilized in accordance with the present technology may be adapted to utilize alternate lock management systems which create locks as needed and store the locked data blocks in a ‘splay tree’ while active. The splay tree is a standard data structure that provides the virtual machine 235 with the ability to rapidly look up a node while moving recently accessed data blocks toward the root. By storing only the needed data locks in a small, fast splay tree, the memory footprint of the appliance 110 may be reduced without an associated compromise of lookup speed. It will be understood that large virtual failover volumes may be accessed using this method. - According to some embodiments, upon repair of the
target computing system 105, also known as a “bare metal restore,” the virtual machine 235 may be paused or “locked.” Pausing the virtual machine 235 preserves the state of the virtual failover volume 130. Moreover, the paused virtual failover volume 130 may be copied directly to the repaired target computing system, allowing for a virtual-to-physical conversion of the virtual failover volume 130 to the target storage medium 120 of the repaired target computing device. - Upon the occurrence of the virtual-to-physical operation, the bootable image file created from the
virtual failover volume 130 may be discarded and the virtual failover volume 130 may be returned to a data state that approximates the data state of the virtual failover volume 130 before the bootable image file was created by the obtain mirror module 210. - The
virtual failover volume 130 may then be reutilized with the repaired target computing system. - Referring now to
FIG. 4, an exemplary method 400 for maintaining a virtual failover volume of a target computing system is shown therein. The method 400 may include the step 405 of allocating a virtual failover volume having a two terabyte size, and a step 410 of formatting the virtual failover volume utilizing an NTFS file structure. - In some embodiments, the
method 400 may include the step 415 of periodically obtaining a mirror of a target computing system on a virtual failover volume as a bootable image file. The step 415 may include periodically comparing the mirror to a configuration of the target computing system to determine changed data blocks relative to the mirror, and storing the changed data blocks as one or more differential files in the virtual failover volume, the one or more differential files being stored separately from the mirror. It will be understood that the periodically obtained mirrors may be stored on a virtual volume as a Windows® root file system. - Next, the
method 400 may include the step 420 of receiving information indicative of a failover event (e.g., failure of the target computing system). Upon receiving information indicative of a failover event, the method 400 may include the step 425 of rendering a bootable image file from the mirror that has been periodically revised. - Next, the
method 400 may include the step 430 of booting the bootable image file via a virtual machine to create a virtual failover computing system. It will be understood that the configuration of the virtual failover computing system may closely approximate the configuration of the target computing system at the failover event. - In some embodiments, the
method 400 may include an optional step 435 of rendering a bootable image file that approximates the configuration of the target computing system at an arbitrary point in time utilizing one or more mirrors and one or more differential files, rather than only utilizing the mirror. The step 435 may include walking the mirror back in time utilizing the one or more differential files to recreate the configuration of the target computing system as it was at the arbitrary point in time. -
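One way step 435 can be realized is by replaying stored differential files on top of the mirror up to the requested point in time. The following is a minimal Python sketch under assumed data shapes (the function name, the list-of-blocks mirror, and the tuple-based differential format are illustrative assumptions, not the patent's implementation):

```python
# Hypothetical sketch: reconstruct a volume at an arbitrary point in time
# by applying each differential file, in capture order, until the target
# time is reached. Later changes are excluded from the point-in-time view.

def render_point_in_time(mirror_blocks, differential_files, target_time):
    """mirror_blocks: list of bytes (one entry per block of the mirror).
    differential_files: list of (timestamp, {block_index: bytes}),
    sorted by timestamp, holding changed blocks captured after the mirror.
    """
    volume = list(mirror_blocks)  # start from the periodically obtained mirror
    for timestamp, changed_blocks in differential_files:
        if timestamp > target_time:
            break  # differentials after the target time are not applied
        for index, data in changed_blocks.items():
            volume[index] = data  # changed block overrides the mirror block
    return volume
```

For example, with differentials captured at times 1, 2, and 3, rendering at time 2 applies only the first two, yielding the configuration as it existed at that moment.
-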
FIG. 5 illustrates an exemplary computing system 500 that may be used to implement an embodiment of the present technology. The system 500 of FIG. 5 may be implemented in the contexts of the target computing devices 105 and the appliance 110. The computing system 500 of FIG. 5 includes one or more processors 510 and main memory 520. Main memory 520 stores, in part, instructions and data for execution by processor 510. Main memory 520 may store the executable code when in operation. The system 500 of FIG. 5 further includes a mass storage device 530, portable storage medium drive(s) 540, output devices 550, user input devices 560, a graphics display 570, and peripheral devices 580. - The components shown in
FIG. 5 are depicted as being connected via a single bus 590. The components may be connected through one or more data transport means. Processor unit 510 and main memory 520 may be connected via a local microprocessor bus, and the mass storage device 530, peripheral device(s) 580, portable storage device 540, and display system 570 may be connected via one or more input/output (I/O) buses. -
Mass storage device 530, which may be implemented with a magnetic disk drive or an optical disk drive, is a non-volatile storage device for storing data and instructions for use by processor unit 510. Mass storage device 530 may store the system software for implementing embodiments of the present invention for purposes of loading that software into main memory 520. -
Portable storage device 540 operates in conjunction with a portable non-volatile storage medium, such as a floppy disk, compact disk, digital video disc, or USB storage device, to input and output data and code to and from the computer system 500 of FIG. 5. The system software for implementing embodiments of the present invention may be stored on such a portable medium and input to the computer system 500 via the portable storage device 540. -
Input devices 560 provide a portion of a user interface. Input devices 560 may include an alphanumeric keypad, such as a keyboard, for inputting alphanumeric and other information, or a pointing device, such as a mouse, a trackball, stylus, or cursor direction keys. Additionally, the system 500 as shown in FIG. 5 includes output devices 550. Suitable output devices include speakers, printers, network interfaces, and monitors. -
Display system 570 may include a liquid crystal display (LCD) or other suitable display device. Display system 570 receives textual and graphical information, and processes the information for output to the display device. -
Peripherals 580 may include any type of computer support device to add additional functionality to the computer system. Peripheral device(s) 580 may include a modem or a router. - The components provided in the
computer system 500 of FIG. 5 are those typically found in computer systems that may be suitable for use with embodiments of the present invention and are intended to represent a broad category of such computer components that are well known in the art. Thus, the computer system 500 of FIG. 5 may be a personal computer, handheld computing system, telephone, mobile computing system, workstation, server, minicomputer, mainframe computer, or any other computing system. The computer may also include different bus configurations, networked platforms, multi-processor platforms, etc. Various operating systems may be used, including Unix, Linux, Windows, Macintosh OS, Palm OS, Android, iPhone OS, and other suitable operating systems. - It is noteworthy that any hardware platform suitable for performing the processing described herein is suitable for use with the technology. Computer-readable storage media refer to any medium or media that participate in providing instructions to a central processing unit (CPU), a processor, a microcontroller, or the like. Such media may take forms including, but not limited to, non-volatile and volatile media such as optical or magnetic disks and dynamic memory, respectively. Common forms of computer-readable storage media include a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic storage medium, a CD-ROM disk, digital video disk (DVD), any other optical storage medium, RAM, PROM, EPROM, a FLASH-EPROM, and any other memory chip or cartridge.
- While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. The descriptions are not intended to limit the scope of the technology to the particular forms set forth herein. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments. It should be understood that the above description is illustrative and not restrictive. To the contrary, the present descriptions are intended to cover such alternatives, modifications, and equivalents as may be included within the spirit and scope of the technology as defined by the appended claims and otherwise appreciated by one of ordinary skill in the art. The scope of the technology should, therefore, be determined not with reference to the above description, but instead should be determined with reference to the appended claims along with their full scope of equivalents.
Claims (21)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/929,336 US20160055062A1 (en) | 2011-02-17 | 2015-10-31 | Systems and Methods for Maintaining a Virtual Failover Volume of a Target Computing System |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/030,073 US9235474B1 (en) | 2011-02-17 | 2011-02-17 | Systems and methods for maintaining a virtual failover volume of a target computing system |
US14/929,336 US20160055062A1 (en) | 2011-02-17 | 2015-10-31 | Systems and Methods for Maintaining a Virtual Failover Volume of a Target Computing System |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/030,073 Continuation US9235474B1 (en) | 2010-09-30 | 2011-02-17 | Systems and methods for maintaining a virtual failover volume of a target computing system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20160055062A1 true US20160055062A1 (en) | 2016-02-25 |
Family
ID=55026457
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/030,073 Active 2032-05-28 US9235474B1 (en) | 2010-09-30 | 2011-02-17 | Systems and methods for maintaining a virtual failover volume of a target computing system |
US14/929,336 Abandoned US20160055062A1 (en) | 2011-02-17 | 2015-10-31 | Systems and Methods for Maintaining a Virtual Failover Volume of a Target Computing System |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/030,073 Active 2032-05-28 US9235474B1 (en) | 2010-09-30 | 2011-02-17 | Systems and methods for maintaining a virtual failover volume of a target computing system |
Country Status (1)
Country | Link |
---|---|
US (2) | US9235474B1 (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9559903B2 (en) | 2010-09-30 | 2017-01-31 | Axcient, Inc. | Cloud-based virtual machines and offices |
US9705730B1 (en) | 2013-05-07 | 2017-07-11 | Axcient, Inc. | Cloud storage using Merkle trees |
US9785647B1 (en) | 2012-10-02 | 2017-10-10 | Axcient, Inc. | File system virtualization |
US9852140B1 (en) | 2012-11-07 | 2017-12-26 | Axcient, Inc. | Efficient file replication |
US9998344B2 (en) | 2013-03-07 | 2018-06-12 | Efolder, Inc. | Protection status determinations for computing devices |
US10284437B2 (en) | 2010-09-30 | 2019-05-07 | Efolder, Inc. | Cloud-based virtual machines and offices |
CN112398699A (en) * | 2020-12-01 | 2021-02-23 | 杭州迪普科技股份有限公司 | Network traffic packet capturing method, device and equipment |
US11150845B2 (en) | 2019-11-01 | 2021-10-19 | EMC IP Holding Company LLC | Methods and systems for servicing data requests in a multi-node system |
US11288211B2 (en) | 2019-11-01 | 2022-03-29 | EMC IP Holding Company LLC | Methods and systems for optimizing storage resources |
US11294725B2 (en) | 2019-11-01 | 2022-04-05 | EMC IP Holding Company LLC | Method and system for identifying a preferred thread pool associated with a file system |
US11392464B2 (en) * | 2019-11-01 | 2022-07-19 | EMC IP Holding Company LLC | Methods and systems for mirroring and failover of nodes |
CN115698954A (en) * | 2020-03-27 | 2023-02-03 | 亚马逊技术有限公司 | Managing failover area availability to implement failover services |
US12066906B2 (en) | 2020-03-27 | 2024-08-20 | Amazon Technologies, Inc. | Managing failover region availability for implementing a failover service |
Families Citing this family (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8307177B2 (en) | 2008-09-05 | 2012-11-06 | Commvault Systems, Inc. | Systems and methods for management of virtualization data |
US11449394B2 (en) | 2010-06-04 | 2022-09-20 | Commvault Systems, Inc. | Failover systems and methods for performing backup operations, including heterogeneous indexing and load balancing of backup and indexing resources |
US9817739B1 (en) * | 2012-10-31 | 2017-11-14 | Veritas Technologies Llc | Method to restore a virtual environment based on a state of applications/tiers |
US20140181044A1 (en) | 2012-12-21 | 2014-06-26 | Commvault Systems, Inc. | Systems and methods to identify uncharacterized and unprotected virtual machines |
US9286086B2 (en) | 2012-12-21 | 2016-03-15 | Commvault Systems, Inc. | Archiving virtual machines in a data storage system |
US20140196038A1 (en) | 2013-01-08 | 2014-07-10 | Commvault Systems, Inc. | Virtual machine management in a data storage system |
US11526403B1 (en) * | 2013-08-23 | 2022-12-13 | Acronis International Gmbh | Using a storage path to facilitate disaster recovery |
US20150074536A1 (en) | 2013-09-12 | 2015-03-12 | Commvault Systems, Inc. | File manager integration with virtualization in an information management system, including user control and storage management of virtual machines |
US20150269029A1 (en) * | 2014-03-20 | 2015-09-24 | Unitrends, Inc. | Immediate Recovery of an Application from File Based Backups |
US9811427B2 (en) | 2014-04-02 | 2017-11-07 | Commvault Systems, Inc. | Information management by a media agent in the absence of communications with a storage manager |
US9454439B2 (en) | 2014-05-28 | 2016-09-27 | Unitrends, Inc. | Disaster recovery validation |
US9448834B2 (en) | 2014-06-27 | 2016-09-20 | Unitrends, Inc. | Automated testing of physical servers using a virtual machine |
US20160019317A1 (en) | 2014-07-16 | 2016-01-21 | Commvault Systems, Inc. | Volume or virtual machine level backup and generating placeholders for virtual machine files |
US9417968B2 (en) | 2014-09-22 | 2016-08-16 | Commvault Systems, Inc. | Efficiently restoring execution of a backed up virtual machine based on coordination with virtual-machine-file-relocation operations |
US9710465B2 (en) | 2014-09-22 | 2017-07-18 | Commvault Systems, Inc. | Efficiently restoring execution of a backed up virtual machine based on coordination with virtual-machine-file-relocation operations |
US9436555B2 (en) | 2014-09-22 | 2016-09-06 | Commvault Systems, Inc. | Efficient live-mount of a backed up virtual machine in a storage management system |
US10776209B2 (en) | 2014-11-10 | 2020-09-15 | Commvault Systems, Inc. | Cross-platform virtual machine backup and replication |
US9983936B2 (en) | 2014-11-20 | 2018-05-29 | Commvault Systems, Inc. | Virtual machine change block tracking |
US9747178B2 (en) | 2015-08-26 | 2017-08-29 | Netapp, Inc. | Configuration inconsistency identification between storage virtual machines |
CN106933493B (en) * | 2015-12-30 | 2020-04-24 | 伊姆西Ip控股有限责任公司 | Method and equipment for capacity expansion of cache disk array |
US10565067B2 (en) | 2016-03-09 | 2020-02-18 | Commvault Systems, Inc. | Virtual server cloud file system for virtual machine backup from cloud operations |
US10747630B2 (en) | 2016-09-30 | 2020-08-18 | Commvault Systems, Inc. | Heartbeat monitoring of virtual machines for initiating failover operations in a data storage management system, including operations by a master monitor node |
US10162528B2 (en) | 2016-10-25 | 2018-12-25 | Commvault Systems, Inc. | Targeted snapshot based on virtual machine location |
US10678758B2 (en) | 2016-11-21 | 2020-06-09 | Commvault Systems, Inc. | Cross-platform virtual machine data and memory backup and replication |
US20180276022A1 (en) | 2017-03-24 | 2018-09-27 | Commvault Systems, Inc. | Consistent virtual machine replication |
US10387073B2 (en) * | 2017-03-29 | 2019-08-20 | Commvault Systems, Inc. | External dynamic virtual machine synchronization |
US10877928B2 (en) | 2018-03-07 | 2020-12-29 | Commvault Systems, Inc. | Using utilities injected into cloud-based virtual machines for speeding up virtual machine backup operations |
US11200124B2 (en) | 2018-12-06 | 2021-12-14 | Commvault Systems, Inc. | Assigning backup resources based on failover of partnered data storage servers in a data storage management system |
US10768971B2 (en) | 2019-01-30 | 2020-09-08 | Commvault Systems, Inc. | Cross-hypervisor live mount of backed up virtual machine data |
US10996974B2 (en) | 2019-01-30 | 2021-05-04 | Commvault Systems, Inc. | Cross-hypervisor live mount of backed up virtual machine data, including management of cache storage for virtual machine data |
US11782610B2 (en) * | 2020-01-30 | 2023-10-10 | Seagate Technology Llc | Write and compare only data storage |
US11467753B2 (en) | 2020-02-14 | 2022-10-11 | Commvault Systems, Inc. | On-demand restore of virtual machine data |
US11392418B2 (en) * | 2020-02-21 | 2022-07-19 | International Business Machines Corporation | Adaptive pacing setting for workload execution |
US11442768B2 (en) | 2020-03-12 | 2022-09-13 | Commvault Systems, Inc. | Cross-hypervisor live recovery of virtual machines |
US11099956B1 (en) | 2020-03-26 | 2021-08-24 | Commvault Systems, Inc. | Snapshot-based disaster recovery orchestration of virtual machine failover and failback operations |
US11748143B2 (en) | 2020-05-15 | 2023-09-05 | Commvault Systems, Inc. | Live mount of virtual machines in a public cloud computing environment |
US11656951B2 (en) | 2020-10-28 | 2023-05-23 | Commvault Systems, Inc. | Data loss vulnerability detection |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030158873A1 (en) * | 2002-02-15 | 2003-08-21 | International Business Machines Corporation | Dynamic links to file system snapshots |
US20040030852A1 (en) * | 2002-03-18 | 2004-02-12 | Coombs David Lawrence | System and method for data backup |
US20080154979A1 (en) * | 2006-12-21 | 2008-06-26 | International Business Machines Corporation | Apparatus, system, and method for creating a backup schedule in a san environment based on a recovery plan |
US7620765B1 (en) * | 2006-12-15 | 2009-11-17 | Symantec Operating Corporation | Method to delete partial virtual tape volumes |
US7720819B2 (en) * | 2007-04-12 | 2010-05-18 | International Business Machines Corporation | Method and apparatus combining revision based and time based file data protection |
US20110005529A1 (en) * | 2004-12-08 | 2011-01-13 | Rajiv Doshi | Methods of treating a sleeping subject |
US20110218966A1 (en) * | 2010-03-02 | 2011-09-08 | Storagecraft Technology Corp. | Systems, methods, and computer-readable media for backup and restoration of computer information |
US20110295811A1 (en) * | 2010-06-01 | 2011-12-01 | Ludmila Cherkasova | Changing a number of disk agents to backup objects to a storage device |
US20120130956A1 (en) * | 2010-09-30 | 2012-05-24 | Vito Caputo | Systems and Methods for Restoring a File |
US8775549B1 (en) * | 2007-09-27 | 2014-07-08 | Emc Corporation | Methods, systems, and computer program products for automatically adjusting a data replication rate based on a specified quality of service (QoS) level |
US8898108B2 (en) * | 2009-01-14 | 2014-11-25 | Vmware, Inc. | System and method for scheduling data storage replication over a network |
Family Cites Families (159)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3497886B2 (en) | 1994-05-10 | 2004-02-16 | 富士通株式会社 | Server data linking device |
US5574905A (en) | 1994-05-26 | 1996-11-12 | International Business Machines Corporation | Method and apparatus for multimedia editing and data recovery |
US6272492B1 (en) | 1997-11-21 | 2001-08-07 | Ibm Corporation | Front-end proxy for transparently increasing web server functionality |
US9292111B2 (en) | 1998-01-26 | 2016-03-22 | Apple Inc. | Gesturing with a multipoint sensing device |
US6205527B1 (en) * | 1998-02-24 | 2001-03-20 | Adaptec, Inc. | Intelligent backup and restoring system and method for implementing the same |
US6122629A (en) | 1998-04-30 | 2000-09-19 | Compaq Computer Corporation | Filesystem data integrity in a single system image environment |
US6604236B1 (en) | 1998-06-30 | 2003-08-05 | Iora, Ltd. | System and method for generating file updates for files stored on read-only media |
US6233589B1 (en) * | 1998-07-31 | 2001-05-15 | Novell, Inc. | Method and system for reflecting differences between two files |
EP0981099A3 (en) | 1998-08-17 | 2004-04-21 | Connected Place Limited | A method of and an apparatus for merging a sequence of delta files |
WO2001006374A2 (en) * | 1999-07-16 | 2001-01-25 | Intertrust Technologies Corp. | System and method for securing an untrusted storage |
AU2001229332A1 (en) * | 2000-01-10 | 2001-07-24 | Connected Corporation | Administration of a differential backup system in a client-server environment |
US6651075B1 (en) * | 2000-02-16 | 2003-11-18 | Microsoft Corporation | Support for multiple temporal snapshots of same volume |
WO2001082098A1 (en) | 2000-04-27 | 2001-11-01 | Fortress Technologies, Inc. | Network interface device having primary and backup interfaces for automatic dial backup upon loss of a primary connection and method of using same |
US6971018B1 (en) * | 2000-04-28 | 2005-11-29 | Microsoft Corporation | File protection service for a computer system |
EP1168174A1 (en) * | 2000-06-19 | 2002-01-02 | Hewlett-Packard Company, A Delaware Corporation | Automatic backup/recovery process |
US6918091B2 (en) | 2000-11-09 | 2005-07-12 | Change Tools, Inc. | User definable interface system, method and computer program product |
EP1374093B1 (en) | 2001-03-27 | 2013-07-03 | BRITISH TELECOMMUNICATIONS public limited company | File synchronisation |
US20030011638A1 (en) | 2001-07-10 | 2003-01-16 | Sun-Woo Chung | Pop-up menu system |
US6877048B2 (en) | 2002-03-12 | 2005-04-05 | International Business Machines Corporation | Dynamic memory allocation between inbound and outbound buffers in a protocol handler |
US7051050B2 (en) * | 2002-03-19 | 2006-05-23 | Netwrok Appliance, Inc. | System and method for restoring a single file from a snapshot |
US7058656B2 (en) * | 2002-04-11 | 2006-06-06 | Sun Microsystems, Inc. | System and method of using extensions in a data structure without interfering with applications unaware of the extensions |
US7058902B2 (en) | 2002-07-30 | 2006-06-06 | Microsoft Corporation | Enhanced on-object context menus |
US7024581B1 (en) * | 2002-10-09 | 2006-04-04 | Xpoint Technologies, Inc. | Data processing recovery system and method spanning multiple operating system |
US7055010B2 (en) * | 2002-11-06 | 2006-05-30 | Synology Inc. | Snapshot facility allowing preservation of chronological views on block drives |
US7624143B2 (en) | 2002-12-12 | 2009-11-24 | Xerox Corporation | Methods, apparatus, and program products for utilizing contextual property metadata in networked computing environments |
US7809693B2 (en) * | 2003-02-10 | 2010-10-05 | Netapp, Inc. | System and method for restoring data on demand for instant volume restoration |
US7320009B1 (en) | 2003-03-28 | 2008-01-15 | Novell, Inc. | Methods and systems for file replication utilizing differences between versions of files |
US7328366B2 (en) | 2003-06-06 | 2008-02-05 | Cascade Basic Research Corp. | Method and system for reciprocal data backup |
US20050010835A1 (en) * | 2003-07-11 | 2005-01-13 | International Business Machines Corporation | Autonomic non-invasive backup and storage appliance |
US7398285B2 (en) | 2003-07-30 | 2008-07-08 | International Business Machines Corporation | Apparatus and system for asynchronous replication of a hierarchically-indexed data store |
US20050193235A1 (en) | 2003-08-05 | 2005-09-01 | Miklos Sandorfi | Emulated storage system |
US7225208B2 (en) | 2003-09-30 | 2007-05-29 | Iron Mountain Incorporated | Systems and methods for backing up data files |
JP4267420B2 (en) * | 2003-10-20 | 2009-05-27 | 株式会社日立製作所 | Storage apparatus and backup acquisition method |
JP4319017B2 (en) * | 2003-12-02 | 2009-08-26 | 株式会社日立製作所 | Storage system control method, storage system, and storage device |
US20050152192A1 (en) * | 2003-12-22 | 2005-07-14 | Manfred Boldy | Reducing occupancy of digital storage devices |
US7315965B2 (en) * | 2004-02-04 | 2008-01-01 | Network Appliance, Inc. | Method and system for storing data using a continuous data protection system |
US7406488B2 (en) * | 2004-02-04 | 2008-07-29 | Netapp | Method and system for maintaining data in a continuous data protection system |
US7966293B1 (en) | 2004-03-09 | 2011-06-21 | Netapp, Inc. | System and method for indexing a backup using persistent consistency point images |
US7277905B2 (en) * | 2004-03-31 | 2007-10-02 | Microsoft Corporation | System and method for a consistency check of a database backup |
US7266655B1 (en) | 2004-04-29 | 2007-09-04 | Veritas Operating Corporation | Synthesized backup set catalog |
US7356729B2 (en) * | 2004-06-14 | 2008-04-08 | Lucent Technologies Inc. | Restoration of network element through employment of bootable image |
US20060013462A1 (en) | 2004-07-15 | 2006-01-19 | Navid Sadikali | Image display system and method |
US7389314B2 (en) | 2004-08-30 | 2008-06-17 | Corio, Inc. | Database backup, refresh and cloning system and method |
US7979404B2 (en) | 2004-09-17 | 2011-07-12 | Quest Software, Inc. | Extracting data changes and storing data history to allow for instantaneous access to and reconstruction of any point-in-time data |
JP4325524B2 (en) | 2004-09-29 | 2009-09-02 | 日本電気株式会社 | Switch device and system, backup and restore method and program |
US7546323B1 (en) | 2004-09-30 | 2009-06-09 | Emc Corporation | System and methods for managing backup status reports |
US7401192B2 (en) | 2004-10-04 | 2008-07-15 | International Business Machines Corporation | Method of replicating a file using a base, delta, and reference file |
KR101277016B1 (en) | 2004-11-05 | 2013-07-30 | 텔코디아 테크놀로지스, 인코포레이티드 | Network discovery mechanism |
US7814057B2 (en) | 2005-04-05 | 2010-10-12 | Microsoft Corporation | Page recovery using volume snapshots and logs |
US8064459B2 (en) * | 2005-07-18 | 2011-11-22 | Broadcom Israel Research Ltd. | Method and system for transparent TCP offload with transmit and receive coupling |
US7743038B1 (en) * | 2005-08-24 | 2010-06-22 | Lsi Corporation | Inode based policy identifiers in a filing system |
US20070112895A1 (en) | 2005-11-04 | 2007-05-17 | Sun Microsystems, Inc. | Block-based incremental backup |
US7730425B2 (en) | 2005-11-30 | 2010-06-01 | De Los Reyes Isabelo | Function-oriented user interface |
US20070204153A1 (en) | 2006-01-04 | 2007-08-30 | Tome Agustin J | Trusted host platform |
US20070180207A1 (en) | 2006-01-18 | 2007-08-02 | International Business Machines Corporation | Secure RFID backup/restore for computing/pervasive devices |
US7667686B2 (en) | 2006-02-01 | 2010-02-23 | Memsic, Inc. | Air-writing and motion sensing input for portable devices |
US7676763B2 (en) | 2006-02-21 | 2010-03-09 | Sap Ag | Method and system for providing an outwardly expandable radial menu |
US20070208918A1 (en) * | 2006-03-01 | 2007-09-06 | Kenneth Harbin | Method and apparatus for providing virtual machine backup |
US20070220029A1 (en) | 2006-03-17 | 2007-09-20 | Novell, Inc. | System and method for hierarchical storage management using shadow volumes |
JP4911576B2 (en) * | 2006-03-24 | 2012-04-04 | 株式会社メガチップス | Information processing apparatus and write-once memory utilization method |
US7650369B2 (en) * | 2006-03-30 | 2010-01-19 | Fujitsu Limited | Database system management method and database system |
US7552044B2 (en) * | 2006-04-21 | 2009-06-23 | Microsoft Corporation | Simulated storage area network |
US7653832B2 (en) | 2006-05-08 | 2010-01-26 | Emc Corporation | Storage array virtualization using a storage block mapping protocol client and server |
US7945726B2 (en) | 2006-05-08 | 2011-05-17 | Emc Corporation | Pre-allocation and hierarchical mapping of data blocks distributed from a first processor to a second processor for use in a file system |
US8949312B2 (en) * | 2006-05-25 | 2015-02-03 | Red Hat, Inc. | Updating clients from a server |
US7568124B2 (en) | 2006-06-02 | 2009-07-28 | Microsoft Corporation | Driving data backups with data source tagging |
US8302091B2 (en) * | 2006-06-05 | 2012-10-30 | International Business Machines Corporation | Installation of a bootable image for modifying the operational environment of a computing system |
US7624134B2 (en) | 2006-06-12 | 2009-11-24 | International Business Machines Corporation | Enabling access to remote storage for use with a backup program |
US7873601B1 (en) * | 2006-06-29 | 2011-01-18 | Emc Corporation | Backup of incremental metadata in block based backup systems |
JP2008015768A (en) * | 2006-07-05 | 2008-01-24 | Hitachi Ltd | Storage system and data management method using the same |
US7783956B2 (en) | 2006-07-12 | 2010-08-24 | Cronera Systems Incorporated | Data recorder |
US20080027998A1 (en) | 2006-07-27 | 2008-01-31 | Hitachi, Ltd. | Method and apparatus of continuous data protection for NAS |
US7809688B2 (en) | 2006-08-04 | 2010-10-05 | Apple Inc. | Managing backup of content |
US7752487B1 (en) | 2006-08-08 | 2010-07-06 | Open Invention Network, Llc | System and method for managing group policy backup |
AU2007295949B2 (en) | 2006-09-12 | 2009-08-06 | Adams Consulting Group Pty. Ltd. | Method system and apparatus for handling information |
US8332442B1 (en) | 2006-09-26 | 2012-12-11 | Symantec Corporation | Automated restoration of links when restoring individual directory service objects |
US7769731B2 (en) | 2006-10-04 | 2010-08-03 | International Business Machines Corporation | Using file backup software to generate an alert when a file modification policy is violated |
US7832008B1 (en) | 2006-10-11 | 2010-11-09 | Cisco Technology, Inc. | Protection of computer resources |
US8117163B2 (en) | 2006-10-31 | 2012-02-14 | Carbonite, Inc. | Backup and restore system for a computer |
JP4459215B2 (en) * | 2006-11-09 | 2010-04-28 | 株式会社ソニー・コンピュータエンタテインメント | GAME DEVICE AND INFORMATION PROCESSING DEVICE |
US8280978B2 (en) | 2006-12-29 | 2012-10-02 | Prodea Systems, Inc. | Demarcation between service provider and user in multi-services gateway device at user premises |
US8880480B2 (en) * | 2007-01-03 | 2014-11-04 | Oracle International Corporation | Method and apparatus for data rollback |
US7647338B2 (en) | 2007-02-21 | 2010-01-12 | Microsoft Corporation | Content item query formulation |
US20080229050A1 (en) | 2007-03-13 | 2008-09-18 | Sony Ericsson Mobile Communications Ab | Dynamic page on demand buffer size for power savings |
US7974950B2 (en) | 2007-06-05 | 2011-07-05 | International Business Machines Corporation | Applying a policy criteria to files in a backup image |
US8010900B2 (en) | 2007-06-08 | 2011-08-30 | Apple Inc. | User interface for electronic backup |
US8676273B1 (en) | 2007-08-24 | 2014-03-18 | Iwao Fujisaki | Communication device |
US8117164B2 (en) | 2007-12-19 | 2012-02-14 | Microsoft Corporation | Creating and utilizing network restore points |
US9503354B2 (en) | 2008-01-17 | 2016-11-22 | Aerohive Networks, Inc. | Virtualization of networking services |
JP2009205333A (en) * | 2008-02-27 | 2009-09-10 | Hitachi Ltd | Computer system, storage device, and data management method |
JP4413976B2 (en) | 2008-05-23 | 2010-02-10 | 株式会社東芝 | Information processing apparatus and version upgrade method for information processing apparatus |
US20090319653A1 (en) * | 2008-06-20 | 2009-12-24 | International Business Machines Corporation | Server configuration management method |
US8245156B2 (en) | 2008-06-28 | 2012-08-14 | Apple Inc. | Radial menu selection |
US8826181B2 (en) | 2008-06-28 | 2014-09-02 | Apple Inc. | Moving radial menus |
US8060476B1 (en) * | 2008-07-14 | 2011-11-15 | Quest Software, Inc. | Backup systems and methods for a virtual computing environment |
US8103718B2 (en) | 2008-07-31 | 2012-01-24 | Microsoft Corporation | Content discovery and transfer between mobile communications nodes |
US8117410B2 (en) * | 2008-08-25 | 2012-02-14 | Vmware, Inc. | Tracking block-level changes using snapshots |
US8279174B2 (en) | 2008-08-27 | 2012-10-02 | Lg Electronics Inc. | Display device and method of controlling the display device |
US8099572B1 (en) * | 2008-09-30 | 2012-01-17 | Emc Corporation | Efficient backup and restore of storage objects in a version set |
US20100104105A1 (en) | 2008-10-23 | 2010-04-29 | Digital Cinema Implementation Partners, Llc | Digital cinema asset management system |
US8495624B2 (en) | 2008-10-23 | 2013-07-23 | International Business Machines Corporation | Provisioning a suitable operating system environment |
US20100114832A1 (en) | 2008-10-31 | 2010-05-06 | Lillibridge Mark D | Forensic snapshot |
US20100179973A1 (en) | 2008-12-31 | 2010-07-15 | Herve Carruzzo | Systems, methods, and computer programs for delivering content via a communications network |
US9383897B2 (en) | 2009-01-29 | 2016-07-05 | International Business Machines Corporation | Spiraling radial menus in computer systems |
US8352717B2 (en) * | 2009-02-09 | 2013-01-08 | Cs-Solutions, Inc. | Recovery system using selectable and configurable snapshots |
US8504785B1 (en) | 2009-03-10 | 2013-08-06 | Symantec Corporation | Method and apparatus for backing up to tape drives with minimum write speed |
US8370835B2 (en) * | 2009-03-12 | 2013-02-05 | Arend Erich Dittmer | Method for dynamically generating a configuration for a virtual machine with a virtual hard disk in an external storage device |
US8099391B1 (en) * | 2009-03-17 | 2012-01-17 | Symantec Corporation | Incremental and differential backups of virtual machine files |
US8260742B2 (en) | 2009-04-03 | 2012-09-04 | International Business Machines Corporation | Data synchronization and consistency across distributed repositories |
JP5317807B2 (en) * | 2009-04-13 | 2013-10-16 | 株式会社日立製作所 | File control system and file control computer used therefor |
US20100268689A1 (en) | 2009-04-15 | 2010-10-21 | Gates Matthew S | Providing information relating to usage of a simulated snapshot |
US8601389B2 (en) | 2009-04-30 | 2013-12-03 | Apple Inc. | Scrollable menus and toolbars |
US8200926B1 (en) | 2009-05-28 | 2012-06-12 | Symantec Corporation | Methods and systems for creating full backups |
US8549432B2 (en) | 2009-05-29 | 2013-10-01 | Apple Inc. | Radial menus |
US8321688B2 (en) | 2009-06-12 | 2012-11-27 | Microsoft Corporation | Secure and private backup storage and processing for trusted computing and data services |
US20100332401A1 (en) * | 2009-06-30 | 2010-12-30 | Anand Prahlad | Performing data storage operations with a cloud storage environment, including automatically selecting among multiple cloud storage sites |
US8244914B1 (en) | 2009-07-31 | 2012-08-14 | Symantec Corporation | Systems and methods for restoring email databases |
JP2011039804A (en) | 2009-08-12 | 2011-02-24 | Hitachi Ltd | Backup management method based on failure contents |
US8209568B2 (en) | 2009-08-21 | 2012-06-26 | Novell, Inc. | System and method for implementing an intelligent backup technique for cluster resources |
US20110055471A1 (en) | 2009-08-28 | 2011-03-03 | Jonathan Thatcher | Apparatus, system, and method for improved data deduplication |
US8335784B2 (en) | 2009-08-31 | 2012-12-18 | Microsoft Corporation | Visual search and three-dimensional results |
US9086928B2 (en) | 2009-08-31 | 2015-07-21 | Accenture Global Services Limited | Provisioner within cloud console—defining images of an enterprise to be operable on different cloud computing providers |
US8645647B2 (en) * | 2009-09-02 | 2014-02-04 | International Business Machines Corporation | Data storage snapshot with reduced copy-on-write |
JP2013011919A (en) * | 2009-09-17 | 2013-01-17 | Hitachi Ltd | Storage apparatus and snapshot control method of the same |
US8589913B2 (en) * | 2009-10-14 | 2013-11-19 | Vmware, Inc. | Tracking block-level writes |
US8112505B1 (en) | 2009-10-20 | 2012-02-07 | Wanova Technologies, Ltd. | On-demand block-level file system streaming to remote desktops |
US8856080B2 (en) * | 2009-10-30 | 2014-10-07 | Microsoft Corporation | Backup using metadata virtual hard drive and differential virtual hard drive |
US8296410B1 (en) | 2009-11-06 | 2012-10-23 | Carbonite, Inc. | Bandwidth management in a client/server environment |
US8572337B1 (en) | 2009-12-14 | 2013-10-29 | Symantec Corporation | Systems and methods for performing live backups |
US9465532B2 (en) | 2009-12-18 | 2016-10-11 | Synaptics Incorporated | Method and apparatus for operating in pointing and enhanced gesturing modes |
CA2794339C (en) | 2010-03-26 | 2017-02-21 | Carbonite, Inc. | Transfer of user data between logical data sites |
CA2794341C (en) | 2010-03-29 | 2017-09-12 | Carbonite, Inc. | Managing backup sets based on user feedback |
US8935212B2 (en) | 2010-03-29 | 2015-01-13 | Carbonite, Inc. | Discovery of non-standard folders for backup |
US8037345B1 (en) * | 2010-03-31 | 2011-10-11 | Emc Corporation | Deterministic recovery of a file system built on a thinly provisioned logical volume having redundant metadata |
US8566354B2 (en) | 2010-04-26 | 2013-10-22 | Cleversafe, Inc. | Storage and retrieval of required slices in a dispersed storage network |
US8224935B1 (en) | 2010-05-12 | 2012-07-17 | Symantec Corporation | Systems and methods for efficiently synchronizing configuration data within distributed computing systems |
US20130091183A1 (en) * | 2010-06-15 | 2013-04-11 | Nigel Edwards | Volume Management |
US8773370B2 (en) | 2010-07-13 | 2014-07-08 | Apple Inc. | Table editing systems with gesture-based insertion and deletion of columns and rows |
US20120065802A1 (en) | 2010-09-14 | 2012-03-15 | Joulex, Inc. | System and methods for automatic power management of remote electronic devices using a mobile device |
US8606752B1 (en) | 2010-09-29 | 2013-12-10 | Symantec Corporation | Method and system of restoring items to a database while maintaining referential integrity |
US8589350B1 (en) | 2012-04-02 | 2013-11-19 | Axcient, Inc. | Systems, methods, and media for synthesizing views of file system backups |
US8954544B2 (en) | 2010-09-30 | 2015-02-10 | Axcient, Inc. | Cloud-based virtual machines and offices |
JP5816424B2 (en) | 2010-10-05 | 2015-11-18 | 富士通株式会社 | Information processing device, tape device, and program |
US8904126B2 (en) * | 2010-11-16 | 2014-12-02 | Actifio, Inc. | System and method for performing a plurality of prescribed data management functions in a manner that reduces redundant access operations to primary storage |
US8495262B2 (en) * | 2010-11-23 | 2013-07-23 | International Business Machines Corporation | Using a table to determine if user buffer is marked copy-on-write |
US8635187B2 (en) * | 2011-01-07 | 2014-01-21 | Symantec Corporation | Method and system of performing incremental SQL server database backups |
US8412680B1 (en) | 2011-01-20 | 2013-04-02 | Commvault Systems, Inc | System and method for performing backup operations and reporting the results thereof |
US8510597B2 (en) * | 2011-02-08 | 2013-08-13 | Wisconsin Alumni Research Foundation | Providing restartable file systems within computing devices |
US20120210398A1 (en) | 2011-02-14 | 2012-08-16 | Bank Of America Corporation | Enhanced Backup and Retention Management |
US8621274B1 (en) | 2011-05-18 | 2013-12-31 | Netapp Inc. | Virtual machine fault tolerance |
WO2013086040A2 (en) | 2011-12-05 | 2013-06-13 | Doyenz Incorporated | Universal pluggable cloud disaster recovery system |
US8600947B1 (en) | 2011-12-08 | 2013-12-03 | Symantec Corporation | Systems and methods for providing backup interfaces |
US20130166511A1 (en) | 2011-12-21 | 2013-06-27 | International Business Machines Corporation | Determining an overall assessment of a likelihood of a backup set resulting in a successful restore |
KR101930263B1 (en) | 2012-03-12 | 2018-12-18 | 삼성전자주식회사 | Apparatus and method for managing contents in a cloud gateway |
US9274897B2 (en) | 2012-05-25 | 2016-03-01 | Symantec Corporation | Backup policy migration and image duplication |
US20140089619A1 (en) | 2012-09-27 | 2014-03-27 | Infinera Corporation | Object replication framework for a distributed computing environment |
US20140149358A1 (en) | 2012-11-29 | 2014-05-29 | Longsand Limited | Configuring computing devices using a template |
US9021452B2 (en) | 2012-12-27 | 2015-04-28 | Commvault Systems, Inc. | Automatic identification of storage requirements, such as for use in selling data storage management solutions |
US9031829B2 (en) | 2013-02-08 | 2015-05-12 | Machine Zone, Inc. | Systems and methods for multi-user multi-lingual communications |
- 2011
  - 2011-02-17 US US13/030,073 patent/US9235474B1/en active Active
- 2015
  - 2015-10-31 US US14/929,336 patent/US20160055062A1/en not_active Abandoned
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030158873A1 (en) * | 2002-02-15 | 2003-08-21 | International Business Machines Corporation | Dynamic links to file system snapshots |
US20040030852A1 (en) * | 2002-03-18 | 2004-02-12 | Coombs David Lawrence | System and method for data backup |
US20110005529A1 (en) * | 2004-12-08 | 2011-01-13 | Rajiv Doshi | Methods of treating a sleeping subject |
US7620765B1 (en) * | 2006-12-15 | 2009-11-17 | Symantec Operating Corporation | Method to delete partial virtual tape volumes |
US20080154979A1 (en) * | 2006-12-21 | 2008-06-26 | International Business Machines Corporation | Apparatus, system, and method for creating a backup schedule in a san environment based on a recovery plan |
US7720819B2 (en) * | 2007-04-12 | 2010-05-18 | International Business Machines Corporation | Method and apparatus combining revision based and time based file data protection |
US8775549B1 (en) * | 2007-09-27 | 2014-07-08 | Emc Corporation | Methods, systems, and computer program products for automatically adjusting a data replication rate based on a specified quality of service (QoS) level |
US8898108B2 (en) * | 2009-01-14 | 2014-11-25 | Vmware, Inc. | System and method for scheduling data storage replication over a network |
US20110218966A1 (en) * | 2010-03-02 | 2011-09-08 | Storagecraft Technology Corp. | Systems, methods, and computer-readable media for backup and restoration of computer information |
US20110295811A1 (en) * | 2010-06-01 | 2011-12-01 | Ludmila Cherkasova | Changing a number of disk agents to backup objects to a storage device |
US20120130956A1 (en) * | 2010-09-30 | 2012-05-24 | Vito Caputo | Systems and Methods for Restoring a File |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10284437B2 (en) | 2010-09-30 | 2019-05-07 | Efolder, Inc. | Cloud-based virtual machines and offices |
US9559903B2 (en) | 2010-09-30 | 2017-01-31 | Axcient, Inc. | Cloud-based virtual machines and offices |
US9785647B1 (en) | 2012-10-02 | 2017-10-10 | Axcient, Inc. | File system virtualization |
US11169714B1 (en) | 2012-11-07 | 2021-11-09 | Efolder, Inc. | Efficient file replication |
US9852140B1 (en) | 2012-11-07 | 2017-12-26 | Axcient, Inc. | Efficient file replication |
US9998344B2 (en) | 2013-03-07 | 2018-06-12 | Efolder, Inc. | Protection status determinations for computing devices |
US10003646B1 (en) | 2013-03-07 | 2018-06-19 | Efolder, Inc. | Protection status determinations for computing devices |
US9705730B1 (en) | 2013-05-07 | 2017-07-11 | Axcient, Inc. | Cloud storage using Merkle trees |
US10599533B2 (en) | 2013-05-07 | 2020-03-24 | Efolder, Inc. | Cloud storage using merkle trees |
US11294725B2 (en) | 2019-11-01 | 2022-04-05 | EMC IP Holding Company LLC | Method and system for identifying a preferred thread pool associated with a file system |
US11150845B2 (en) | 2019-11-01 | 2021-10-19 | EMC IP Holding Company LLC | Methods and systems for servicing data requests in a multi-node system |
US11288211B2 (en) | 2019-11-01 | 2022-03-29 | EMC IP Holding Company LLC | Methods and systems for optimizing storage resources |
US11392464B2 (en) * | 2019-11-01 | 2022-07-19 | EMC IP Holding Company LLC | Methods and systems for mirroring and failover of nodes |
CN115698954A (en) * | 2020-03-27 | 2023-02-03 | 亚马逊技术有限公司 | Managing failover area availability to implement failover services |
US12066906B2 (en) | 2020-03-27 | 2024-08-20 | Amazon Technologies, Inc. | Managing failover region availability for implementing a failover service |
CN112398699A (en) * | 2020-12-01 | 2021-02-23 | 杭州迪普科技股份有限公司 | Network traffic packet capturing method, device and equipment |
Also Published As
Publication number | Publication date |
---|---|
US9235474B1 (en) | 2016-01-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9235474B1 (en) | Systems and methods for maintaining a virtual failover volume of a target computing system | |
US11789823B2 (en) | Selective processing of file system objects for image level backups | |
US10503604B2 (en) | Virtual machine data protection | |
US9563513B2 (en) | O(1) virtual machine (VM) snapshot management | |
US9639432B2 (en) | Live rollback for a computing environment | |
US8868860B2 (en) | Restore in cascaded copy environment | |
JP5512760B2 (en) | High efficiency portable archive | |
US8856591B2 (en) | System and method for data disaster recovery | |
US20070180206A1 (en) | Method of updating a duplicate copy of an operating system on the same disk | |
US10592354B2 (en) | Configurable recovery states | |
US20130325810A1 (en) | Creation and expiration of backup objects in block-level incremental-forever backup systems | |
US11144233B1 (en) | Efficiently managing point-in-time copies of data within a primary storage system | |
US7383466B2 (en) | Method and system of previewing a volume revert operation | |
US20180341561A1 (en) | Determining modified portions of a raid storage array | |
US10324807B1 (en) | Fast native file system creation for backup files on deduplication systems | |
US10976952B2 (en) | System and method for orchestrated application protection | |
US10564894B2 (en) | Free space pass-through | |
US20220011938A1 (en) | System and method for selectively restoring data | |
EP3591531B1 (en) | Instant restore and instant access of hyper-v vms and applications running inside vms using data domain boostfs | |
US20240202332A1 (en) | System and Method for Ransomware Scan Using Incremental Data Blocks | |
US11675668B2 (en) | Leveraging a cloud-based object storage to efficiently manage data from a failed backup operation | |
US11755425B1 (en) | Methods and systems for synchronous distributed data backup and metadata aggregation | |
US11099948B2 (en) | Persistent storage segment caching for data recovery | |
US10078641B1 (en) | Optimized lock detection in a change block tracker |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: AXCIENT, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PETRI, ROBERT;LALONDE, ERIC;CAPUTO, VITO;REEL/FRAME:037387/0660 Effective date: 20110217 |
|
AS | Assignment |
Owner name: STRUCTURED ALPHA LP, CANADA Free format text: SECURITY INTEREST;ASSIGNOR:AXCIENT, INC.;REEL/FRAME:042542/0364 Effective date: 20170530 |
|
AS | Assignment |
Owner name: SILVER LAKE WATERMAN FUND, L.P., CALIFORNIA Free format text: SECURITY INTEREST;ASSIGNOR:AXCIENT, INC.;REEL/FRAME:042577/0901 Effective date: 20170530 |
|
AS | Assignment |
Owner name: AXCIENT, INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:SILVER LAKE WATERMAN FUND, L.P.;REEL/FRAME:043106/0389 Effective date: 20170726 |
|
AS | Assignment |
Owner name: AXCIENT, INC., CALIFORNIA Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:STRUCTURED ALPHA LP;REEL/FRAME:043840/0227 Effective date: 20171011 |
|
AS | Assignment |
Owner name: AXCI (AN ABC) LLC, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AXCIENT, INC.;REEL/FRAME:044367/0507 Effective date: 20170726 |
Owner name: AXCIENT HOLDINGS, LLC, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AXCI (AN ABC) LLC;REEL/FRAME:044368/0556 Effective date: 20170726 |
Owner name: EFOLDER, INC., GEORGIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AXCIENT HOLDINGS, LLC;REEL/FRAME:044370/0412 Effective date: 20170901 |
|
AS | Assignment |
Owner name: WELLS FARGO BANK, NATIONAL ASSOCIATION, AS AGENT, CALIFORNIA Free format text: SECURITY INTEREST;ASSIGNOR:EFOLDER, INC.;REEL/FRAME:044563/0633 Effective date: 20160725 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
AS | Assignment |
Owner name: MUFG UNION BANK, N.A., ARIZONA Free format text: SECURITY INTEREST;ASSIGNOR:EFOLDER, INC.;REEL/FRAME:061559/0703 Effective date: 20221027 |
|
AS | Assignment |
Owner name: EFOLDER, INC., COLORADO Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION;REEL/FRAME:061634/0623 Effective date: 20221027 |
|
AS | Assignment |
Owner name: EFOLDER, INC., COLORADO Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:U.S. BANK NATIONAL ASSOCIATION FORMERLY MUFG UNION BANK, N.A.;REEL/FRAME:068680/0802 Effective date: 20240919 |