CN113885800B - Object storage optimization method applied to Ceph - Google Patents
Object storage optimization method applied to Ceph
Info
- Publication number
- CN113885800B (application CN202111155948.0A)
- Authority
- CN
- China
- Prior art keywords
- pool
- ssd
- hdd
- osd
- cluster
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0608—Saving storage space on storage systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0629—Configuration or reconfiguration of storage systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses an object storage optimization method applied to Ceph, which belongs to the field of object storage optimization and solves the problem of the high cost of improving storage performance in the prior art. The method comprises the following steps. Step A: adding the osds of ssd devices into a cluster, and establishing a data structure for storing the cluster data in the storage space of the system. Step B: the processor reads the cluster data from the data structure, and configures different rules for the osds of the ssd devices and the osds of the hdd devices in the cluster. Step C: establishing an hdd_pool and an ssd_pool based on the different rules in step B, and respectively establishing a data structure and a storage space corresponding to the hdd_pool and the ssd_pool in the system storage space. Step D: configuring a default placement in rgw so that object data is stored in the storage space corresponding to ssd_pool, while index and metadata are stored in the storage space corresponding to hdd_pool. This achieves the technical effects of improving storage performance and the utilization rate of ssd devices while reducing hardware cost.
Description
Technical Field
The invention belongs to the field of object storage optimization, and particularly relates to an object storage optimization method applied to Ceph.
Background
Ceph is a popular distributed storage system that provides object, block, and file storage. Its object storage stack includes rados, a reliable, self-organizing, self-healing and self-managed distributed object store, and rgw, a bucket-based REST gateway compatible with S3 and Swift. As a client of rados, rgw provides a unified object access interface to the outside by storing object metadata (metadata), bucket index data (bucket index) and object data in separate rados storage pools. These three kinds of data correspond respectively to the data_pool, data_extra_pool and index_pool resource pools, and through these pools they are distributed across all the osds of the cluster.
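For orientation, the pools that the default zone maps these data types to can be inspected with the standard tools. The sketch below is illustrative only and assumes a default zone exists; the pool names in the comments are common defaults, not part of the invention:
# list all pools in the cluster (the rgw pools typically include
# names such as default.rgw.buckets.data and default.rgw.buckets.index)
ceph osd lspools
# dump the zone configuration; its placement_pools section shows which
# data_pool, index_pool and data_extra_pool each placement target uses
radosgw-admin zone get --rgw-zone=default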
Ceph itself does not provide a tool or command for storing different kinds of object data on different osds. To improve object storage performance, an obvious approach is to deploy all osds on ssd hard disks, improving rgw performance by improving the read-write performance of the rados cluster. However, using ssd hard disks exclusively is costly.
Disclosure of Invention
Aiming at the problem of the high cost of improving storage performance in the prior art, the invention provides an object storage optimization method applied to Ceph, which aims to improve storage performance and the utilization rate of ssd devices while reducing hardware cost.
The technical scheme adopted by the invention is as follows:
an object storage optimization method applied to Ceph comprises the following steps:
step A: adding the osds (Object-based Storage Device) of ssd (solid state disk) devices into a cluster, and establishing a data structure for storing the cluster data in the storage space of the system;
step B: the processor reads the cluster data from the data structure, and configures different rules (storage rules of the storage devices) for the osds of the ssd devices and the osds of the hdd (hard disk drive) devices in the cluster;
step C: establishing an hdd_pool (resource pool of the hard disk drives) and an ssd_pool (resource pool of the solid state disks) based on the different rules in step B, and respectively establishing a data structure and a storage space corresponding to the hdd_pool and the ssd_pool in the system storage space;
step D: in rgw (the object gateway of Ceph), configuring a default placement (placement configuration) so that object data is stored in the storage space corresponding to ssd_pool, while index (index data) and metadata are stored in the storage space corresponding to hdd_pool.
By adopting this scheme, through configuring the crushmap (the data distribution map of ceph), the zone and the region (region of the ceph cluster), the utilization rate of the ssd devices is improved, the original hdd devices can continue to be used, and the hardware cost is reduced as a whole.
The specific steps of the step A are as follows:
step A1: a plurality of osds are arranged in the cluster in the data structure, where osd0-osdm are the existing osds of the cluster; osds are created for the ssd devices, the sequence numbers of the newly created osds being osdn-osdx, and osdn-osdx are stored in a storage medium in an array structure;
step A2: registering the keyrings according to the osds obtained in step A1.
The specific steps of the step B are as follows:
step B1: exporting the crushmap and generating a crushmapgot file (the file name under which the crushmap is stored): the command ceph osd getcrushmap -o /root/crushmapgot is executed, generating the crushmapgot file under the root directory;
step B2: decompiling the crushmapgot file to produce a decrushmap file;
step B3: modifying the crushmap by adding buckets;
step B4: modifying the crushmap by adding rules;
step B5: compiling the modified crushmap to obtain a new crushmap file newcrushmap;
step B6: importing newcrushmap.
The specific steps of the step C are as follows:
step C1: creating a pool for the hdd device osds and a pool for the ssd device osds to obtain hdd_pool and ssd_pool;
step C2: according to the rules of step B, modifying the rules of the hdd_pool and ssd_pool from step C1 so that hdd_pool uses the osds of all hdd devices and ssd_pool uses the osds of all ssd devices.
The specific steps of the step D are as follows:
step D1: exporting the zone configuration of the object gateway;
step D2: adding a new default placement, where the new placement uses the hdd_pool and ssd_pool created in step C;
step D3: importing the placement configuration of step D2 into the cluster;
step D4: exporting the region configuration of the object gateway;
step D5: modifying the region configuration so that the new_placement added in step D2 is used as the default placement policy;
step D6: importing the region configuration into the object gateway.
The specific steps of step B3 are as follows: according to osd0-osdm obtained in step A1, osd0-osdm are read out of the array structure and a bucket named hdd is created, and a bucket named ssd is created according to osdn-osdx.
In summary, due to the adoption of the above technical scheme, the beneficial effects of the invention are as follows: by configuring the crushmap, zone and region, the utilization rate of ssd devices is improved, the original hdd devices can continue to be used, and the hardware cost is reduced as a whole.
Drawings
The invention will now be described by way of example and with reference to the accompanying drawings in which:
FIG. 1 is an overall flow chart of one embodiment of the present invention.
Detailed Description
All of the features disclosed in this specification, or all of the steps in a method or process disclosed, may be combined in any combination, except for mutually exclusive features and/or steps.
The present invention is described in detail below with reference to fig. 1.
An object storage optimization method applied to Ceph comprises the following steps:
step A: adding the osds of ssd devices into a cluster, and establishing a data structure for storing the cluster data in the storage space of the system;
step B: the processor reads the cluster data from the data structure, and configures different rules for the osds of the ssd devices and the osds of the hdd devices in the cluster;
step C: establishing an hdd_pool and an ssd_pool based on the different rules in step B, and respectively establishing a data structure and a storage space corresponding to the hdd_pool and the ssd_pool in the system storage space;
step D: configuring a default placement in rgw so that object data is stored in the storage space corresponding to ssd_pool, while index and metadata are stored in the storage space corresponding to hdd_pool.
Here osd (Object-based Storage Device) denotes an object storage device; ssd a solid state disk; hdd a hard disk drive, such as the C and D drives of a computer; rule a storage rule of the storage devices; pool a resource pool; hdd_pool the resource pool of the hard disk drives; ssd_pool the resource pool of the solid state disks; rgw (RADOS Gateway) the object gateway of Ceph; index the index data; metadata the metadata; and placement the placement configuration.
The specific steps of the step A are as follows:
step A1: a plurality of osds are arranged in the cluster in the data structure, where osd0-osdm are the existing osds of the cluster; osds are created for the ssd devices, the sequence numbers of the newly created osds being osdn-osdx, and osdn-osdx are stored in a storage medium in an array structure;
step A2: registering the keyrings according to the osds obtained in step A1.
The specific steps of the step B are as follows:
step B1: exporting the crushmap and generating a crushmapgot file (the file name under which the crushmap is stored): the command ceph osd getcrushmap -o /root/crushmapgot is executed, generating the crushmapgot file under the root directory;
step B2: decompiling the crushmapgot file to produce a decrushmap file;
step B3: modifying the crushmap by adding buckets;
step B4: modifying the crushmap by adding rules;
step B5: compiling the modified crushmap to obtain a new crushmap file newcrushmap;
step B6: importing newcrushmap.
The specific steps of the step C are as follows:
step C1: creating a pool for the hdd device osds and a pool for the ssd device osds to obtain hdd_pool and ssd_pool;
step C2: according to the rules of step B, modifying the rules of the hdd_pool and ssd_pool from step C1 so that hdd_pool uses the osds of all hdd devices and ssd_pool uses the osds of all ssd devices.
The specific steps of the step D are as follows:
step D1: exporting the zone configuration of the object gateway;
step D2: adding a new default placement, where the new placement uses the hdd_pool and ssd_pool created in step C;
step D3: importing the placement configuration of step D2 into the cluster;
step D4: exporting the region configuration of the object gateway;
step D5: modifying the region configuration so that the new_placement added in step D2 is used as the default placement policy;
step D6: importing the region configuration into the object gateway.
The specific steps of step B3 are as follows: according to osd0-osdm obtained in step A1, osd0-osdm are read out of the array structure and a bucket named hdd is created, and a bucket named ssd is created according to osdn-osdx.
Here crushmapgot is the file name under which the exported crushmap is stored; there is no particular limitation on this name, which is simply used to refer to the exported crushmap. The crushmap is the data distribution map that determines where objects are stored in ceph: through this map the crush algorithm can locate the storage position of the data, so that reads and writes go directly to the corresponding osd. The keyring is a key file, which in this application is used later for communication authentication by the osd process. A placement is a placement policy, also part of the crushmap, defining how ceph maps pgs (placement groups) to pools (resource pools). A region is a region of the ceph cluster and is in fact a logical concept: by default a ceph cluster has only one region, and a ceph cluster containing several regions must have an available master region to respond to read-write requests, the regions synchronizing data among themselves to keep it consistent. Regions are conceptually similar to disaster recovery data centers in different areas: the data of the different data centers is kept consistent, so that after one center fails the other centers can still provide a consistent data service. Region configuration refers to the configuration of one particular region. A bucket refers to a level of the ceph cluster hierarchy (by default the minimum is generally the host level); if the level is host, each osd in the bucket runs on an independent server, a host being a server. A zone is a domain (see the description of a region): a region may contain one or more zones, but a master zone must be designated to serve clients. Logically there may be several different zones corresponding to different rgw instances, while internally there may be a single ceph storage cluster; by default there is only one zone.
In this embodiment, a total of 20 osds are set up in the cluster, where m has the value 9, n the value 10 and x the value 19. The specific step of registering the keyrings is to execute a command for each of the obtained osd sequence numbers 10 to 19; taking osd10 as an example, the executed command is ceph auth add osd.10 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-10/keyring. It should be noted that, since the ssd-related bucket and rule have not yet been created at this point, the new osds are not yet added to the crushmap.
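For the remaining new osds the same command is repeated with the corresponding sequence number. A minimal shell sketch of this loop (illustrative only, assuming each keyring already exists under /var/lib/ceph/osd/ceph-<id>/keyring as in the example above):
# register the keyrings of the newly created ssd osds (osd.10 to osd.19)
for i in $(seq 10 19); do
ceph auth add osd.$i osd 'allow *' mon 'allow profile osd' \
-i /var/lib/ceph/osd/ceph-$i/keyring
done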
In step B3, the decrushmap file from step B2 is edited and the following is added under the root default bucket:
root hdd {
id -5 # do not change unnecessarily
# weight 9.000
alg straw
hash 0 # rjenkins1
item osd.0 weight 1.000
item osd.1 weight 1.000
item osd.2 weight 1.000
item osd.3 weight 1.000
item osd.4 weight 1.000
item osd.5 weight 1.000
item osd.6 weight 1.000
item osd.7 weight 1.000
item osd.8 weight 1.000
item osd.9 weight 1.000
}
root ssd {
id -6 # do not change unnecessarily
# weight 6.000
alg straw
hash 0 # rjenkins1
item osd.10 weight 1.000
item osd.11 weight 1.000
item osd.12 weight 1.000
item osd.13 weight 1.000
item osd.14 weight 1.000
item osd.15 weight 1.000
item osd.16 weight 1.000
item osd.17 weight 1.000
item osd.18 weight 1.000
item osd.19 weight 1.000
}
In step B4, the decrushmap file from step B2 is edited and the following is added after the existing rule definitions:
rule hdd {
ruleset 2
type replicated
min_size 1
max_size 10
step take hdd
step chooseleaf firstn 0 type hdd
step emit
}
rule ssd {
ruleset 3
type replicated
min_size 1
max_size 10
step take ssd
step chooseleaf firstn 0 type ssd
step emit
}
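The decompile, compile and import operations referred to in steps B2, B5 and B6 are not spelled out above. A minimal sketch of the usual crushtool commands, assuming the file names crushmapgot, decrushmap and newcrushmap used in this embodiment:
# step B2: decompile the exported binary crushmap into an editable text file
crushtool -d /root/crushmapgot -o /root/decrushmap
# (edit /root/decrushmap as described in steps B3 and B4)
# step B5: compile the edited text file into a new binary crushmap
crushtool -c /root/decrushmap -o /root/newcrushmap
# step B6: import the new crushmap into the cluster
ceph osd setcrushmap -i /root/newcrushmap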
In step C1, when the hdd_pool is created, the command ceph osd pool create hdd_pool 512 512 is executed, where 512 is the pg count, calculated from the number of osds as osd_number * 100 / replica_count and rounded up to the nearest integer power of 2; when the ssd_pool is created, the command ceph osd pool create ssd_pool 512 512 is executed;
In step C2 the rules of the pools from step C1 are modified based on the rules of step B: the command ceph osd pool set hdd_pool crush_ruleset 2 is executed so that hdd_pool applies ruleset 2, which is the number of the rule associated with the hdd device osds in step B4, and the command ceph osd pool set ssd_pool crush_ruleset 3 is executed so that ssd_pool applies ruleset 3. From this point on, hdd_pool uses the osds of all hdd devices and ssd_pool uses the osds of all ssd devices.
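As an optional check (not part of the claimed steps), the rule assignment can be verified with standard commands; a sketch assuming the ruleset numbers above and a release that still uses the crush_ruleset name (newer releases use crush_rule instead):
# list the crush rules known to the cluster; hdd and ssd should appear
ceph osd crush rule ls
# confirm which ruleset each pool is using
ceph osd pool get hdd_pool crush_ruleset
ceph osd pool get ssd_pool crush_ruleset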
In step D1, executing the command radosgw-admin zone get --rgw-zone=default > zone.json creates a zone.json file under the current directory containing the current default placement group policy (default_placement). In step D2, using the hdd_pool and ssd_pool created in step C, the zone.json file from step D1 is edited and a json object is added to the placement_pools list, with the following content:
{
"key": "new_placement",
"val": {
"index_pool": "hdd_pool",
"data_pool": "ssd_pool",
"data_extra_pool": "hdd_pool",
"index_type": 0
}
}
In this way the index and extra data are stored in the hdd_pool, while the object data is stored in the ssd_pool;
In step D3, the command radosgw-admin zone set --rgw-zone=default --infile zone.json is executed to import the configuration of step D2 into the cluster. In step D4, the command radosgw-admin region get > region.conf.json is executed to obtain the region configuration file region.conf.json of the current object gateway. In step D5, the new_placement added in step D2 is used as the default placement policy: region.conf.json is edited, new_placement is added to the placement_targets list, and default_placement is modified to new_placement, as follows:
"placement_targets": [
{ "name": "default_placement",
"tags": []},
{ "name": "ssd_placement",
"tags": []}],
"default_placement": "new_placement"
In step D6, the command radosgw-admin region set < region.conf.json is executed to import the modified region.conf.json configuration of step D5 into the current object gateway.
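To make the new placement take effect and exercise it end to end, one would typically restart the gateway and write a test object. The following is only an illustrative sketch, assuming a systemd-managed radosgw service and an s3cmd client already configured with the gateway's credentials (neither of which is specified by the patent):
# restart the object gateway so it picks up the new zone/region configuration
systemctl restart ceph-radosgw.target
# create a bucket and upload a test object through the gateway;
# its data should land in ssd_pool and its index in hdd_pool
s3cmd mb s3://testbucket
s3cmd put /tmp/testfile s3://testbucket/testfile
# pool usage can then be inspected to confirm where the data went
ceph df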
The foregoing examples merely represent specific embodiments of the present application; they are described in some detail but are not to be construed as limiting the scope of the present application. It should be noted that those skilled in the art can make several variations and modifications without departing from the technical solution of the present application, and these fall within the protection scope of the present application.
Claims (3)
1. An object storage optimization method applied to Ceph is characterized by comprising the following steps:
step A: adding the osds of ssd devices into a cluster, and establishing a data structure for storing the cluster data in the storage space of the system;
step B: the processor reads the cluster data from the data structure, and configures different rules for the osds of the ssd devices and the osds of the hdd devices in the cluster;
step C: establishing an hdd_pool and an ssd_pool based on the different rules in step B, and respectively establishing a data structure and a storage space corresponding to the hdd_pool and the ssd_pool in the system storage space;
step D: configuring a default placement in rgw, storing object data in the storage space corresponding to ssd_pool, and storing index and metadata in the storage space corresponding to hdd_pool;
the specific steps of the step D are as follows:
step D1: exporting the zone configuration of the object gateway;
step D2: adding a new default placement, where the new placement uses the hdd_pool and ssd_pool created in step C;
step D3: importing the placement configuration of step D2 into the cluster;
step D4: exporting the region configuration of the object gateway;
step D5: modifying the region configuration so that the new_placement added in step D2 is used as the default placement policy;
step D6: importing the region configuration into the object gateway;
the specific steps of the step B are as follows:
step B1: exporting the crushmap and generating a crushmapgot file: the command ceph osd getcrushmap -o /root/crushmapgot is executed, generating the crushmapgot file under the root directory;
step B2: decompiling the crushmapgot file to produce a decrushmap file;
step B3: modifying the crushmap by adding buckets;
step B4: modifying the crushmap by adding rules;
step B5: compiling the modified crushmap to obtain a new crushmap file newcrushmap;
step B6: importing newcrushmap;
the specific steps of the step C are as follows:
step C1: creating a pool for the hdd device osds and a pool for the ssd device osds to obtain hdd_pool and ssd_pool;
step C2: according to the rules of step B, modifying the rules of the hdd_pool and ssd_pool from step C1 so that hdd_pool uses the osds of all hdd devices and ssd_pool uses the osds of all ssd devices.
2. The method for optimizing object storage applied to Ceph according to claim 1, wherein the specific steps of step a are as follows:
step A1: a plurality of osds are arranged in the cluster in the data structure, where osd0-osdm are the existing osds of the cluster; osds are created for the ssd devices, the sequence numbers of the newly created osds being osdn-osdx, and osdn-osdx are stored in a storage medium in an array structure;
step A2: registering the keyrings according to the osds obtained in step A1.
3. The method for optimizing object storage applied to Ceph according to claim 1, wherein the specific steps of step B3 are as follows: according to osd0-osdm obtained in step A1, osd0-osdm are read out of the data structure and a bucket named hdd is created, and a bucket named ssd is created according to osdn-osdx.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111155948.0A CN113885800B (en) | 2021-09-30 | 2021-09-30 | Object storage optimization method applied to Ceph |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113885800A CN113885800A (en) | 2022-01-04 |
CN113885800B true CN113885800B (en) | 2024-01-09 |
Family
ID=79004479
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111155948.0A Active CN113885800B (en) | 2021-09-30 | 2021-09-30 | Object storage optimization method applied to Ceph |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113885800B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114385090B (en) * | 2022-03-23 | 2022-06-07 | 深圳市杉岩数据技术有限公司 | Data automatic processing method and device based on object storage site synchronization mechanism |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9240985B1 (en) * | 2012-08-16 | 2016-01-19 | Netapp, Inc. | Method and system for managing access to storage space in storage systems |
CN109284258A (en) * | 2018-08-13 | 2019-01-29 | 华东计算技术研究所(中国电子科技集团公司第三十二研究所) | Distributed multi-level storage system and method based on HDFS |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9298397B2 (en) * | 2013-11-12 | 2016-03-29 | Globalfoundries Inc. | Nonvolatile storage thresholding for ultra-SSD, SSD, and HDD drive intermix |
US10339016B2 (en) * | 2017-08-10 | 2019-07-02 | Rubrik, Inc. | Chunk allocation |
-
2021
- 2021-09-30 CN CN202111155948.0A patent/CN113885800B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9240985B1 (en) * | 2012-08-16 | 2016-01-19 | Netapp, Inc. | Method and system for managing access to storage space in storage systems |
CN109284258A (en) * | 2018-08-13 | 2019-01-29 | 华东计算技术研究所(中国电子科技集团公司第三十二研究所) | Distributed multi-level storage system and method based on HDFS |
Non-Patent Citations (1)
Title |
---|
Chen Yang et al., "Deployment and Optimization of a Ceph RadosGW Object Storage Cluster", Modern Computer (现代计算机), 2020, Vol. 14, No. 5, pp. 17-20. *
Also Published As
Publication number | Publication date |
---|---|
CN113885800A (en) | 2022-01-04 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |