
CN115113817A - Network card-based storage optimization method and system, electronic device and storage medium - Google Patents

Network card-based storage optimization method and system, electronic device and storage medium

Info

Publication number
CN115113817A
CN115113817A CN202210730635.1A CN202210730635A
Authority
CN
China
Prior art keywords
network card
cpu core
resource pool
resources
utilization rate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210730635.1A
Other languages
Chinese (zh)
Other versions
CN115113817B (en)
Inventor
贾猛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202210730635.1A priority Critical patent/CN115113817B/en
Publication of CN115113817A publication Critical patent/CN115113817A/en
Application granted granted Critical
Publication of CN115113817B publication Critical patent/CN115113817B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629Configuration or reconfiguration of storage systems
    • G06F3/0631Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0683Plurality of storage devices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides a network card-based storage optimization method, system, electronic device and storage medium, comprising the following steps: determining the distribution range of the network card CPU core resources according to the physical memory position corresponding to the network card; establishing a network card resource pool according to the network card CPU core resources in the distribution range; adjusting the number of the network card CPU core resources in the network card resource pool and updating the network card resource pool according to the utilization rate of the network card CPU core resources and a preset rule; and finishing storage optimization of the server by utilizing the network card resource pool. Based on real-time monitoring, the number of CPU core resources in the network card resource pool is dynamically adjusted, so that the network card resource pool is suitable for storage servers with different configurations; in addition, the network card is kept working at its highest performance at all times, and resource waste caused by allocating too many CPU core resources to the network card is avoided; furthermore, by reasonably utilizing existing resources, the read-write IOPS and other performance of the storage server can be effectively and greatly improved without upgrading to better hardware.

Description

Network card-based storage optimization method and system, electronic device and storage medium
Technical Field
The present invention relates to the field of storage technologies, and in particular, to a storage optimization method and system based on a network card, an electronic device, and a storage medium.
Background
The tide of technology has fully ushered in the era of flash storage, and the large-scale adoption of flash arrays has become an unstoppable trend. With the implementation of technologies such as artificial intelligence, big data, cloud computing, 5G and the Internet of Things, explosive growth of massive data and unprecedented demands for extreme performance have followed. In the past, all-flash storage application scenarios mainly focused on core transaction systems with low response latency requirements and high IOPS (Input/Output Operations Per Second) requirements, such as online transactions in traditional financial industry systems, where the storage system is usually required to deliver performance as high as 8000 IOPS/TB. With the increasing demand for all-flash storage, how to improve the performance of an all-flash server and bring its capability into full play has become a great challenge for major storage vendors.
Existing storage optimization schemes are generally configured according to the computer system and statically allocate resources to the computer to manage storage; however, such schemes ignore the characteristics and dynamic memory requirements of different workloads, and existing optimization strategies do not consider obtaining corresponding task scheduling information from the system architecture, which results in some optimization opportunities being lost. As for storage optimization schemes that bind CPU cores, the network card is simply bound to several fixed CPU cores; the optimization cost is high, existing resources are not reasonably utilized, which causes a certain amount of waste, and the performance bottleneck of the network card cannot be avoided.
Therefore, a storage optimization method capable of guaranteeing network card performance and reasonable resource allocation is needed to solve the above technical problems in the prior art.
Disclosure of Invention
In order to solve the defects of the prior art, the present invention provides a storage optimization method, system, electronic device and storage medium based on a network card, so as to solve the above technical problems of the prior art.
In order to achieve the above object, the first aspect of the present invention provides a storage optimization method based on a network card, where the method includes:
determining the distribution range of the network card CPU core resource according to the physical memory position corresponding to the network card;
establishing a network card resource pool according to the network card CPU core resources in the distribution range;
adjusting the number of the network card CPU core resources in the network card resource pool and updating the network card resource pool according to the utilization rate of the network card CPU core resources and a preset rule;
and finishing storage optimization of the server by utilizing the network card resource pool.
In some embodiments, the adjusting the number of the network card CPU core resources in the network card resource pool and updating the network card resource pool according to the usage rate of the network card CPU core resources and a preset rule includes:
comparing the utilization rate of each network card CPU core resource with an idle threshold value;
if the utilization rate of any network card CPU core resource is smaller than the idle threshold, the number of the network card CPU core resources in the network card resource pool is decreased progressively according to the step length until the utilization rate of each network card CPU core resource is larger than the idle threshold.
In some embodiments, the adjusting the number of the network card CPU core resources in the network card resource pool and updating the network card resource pool according to the usage rate of the network card CPU core resources and a preset rule further includes:
comparing the utilization rate of each network card CPU core resource with a busy threshold value;
and if the utilization rate of any network card CPU core resource is greater than the busy threshold, increasing the number of the network card CPU core resources in the network card resource pool according to the step length until the utilization rate of each network card CPU core resource is less than the busy threshold and the number of the network card CPU core resources is less than or equal to the maximum value of the number of the network card CPU core resources in the distribution range.
In some embodiments, the method further comprises:
and if the utilization rate of each network card CPU core resource in the network card resource pool is greater than the busy threshold value and the number of the distributed network card CPU core resources reaches the maximum value, generating alarm information to prompt a user that the network card resource pool has insufficient resources.
In some embodiments, the establishing a network card resource pool according to the network card CPU core resources within the allocation range further includes:
and selecting a preset number of the network card CPU core resources from the range of the network card CPU core resources to construct the network card resource pool when the network card resource pool is started.
In some embodiments, the method further comprises:
detecting the fluctuation proportion of the CPU utilization rate;
and determining the step length according to the fluctuation ratio of the CPU utilization rate and a preset condition.
In some embodiments, the method further comprises:
screening the processes in the network card resource pool;
and excluding the screened non-network card processes from the network card resource pool;
and the non-network card process is taken over by CPU core resources outside the network card resource pool.
In a second aspect, the present application provides a network card-based storage optimization system, including:
the core allocation module is used for determining the allocation range of the CPU core resource of the network card according to the physical memory position corresponding to the network card;
the data preparation module is used for establishing a network card resource pool according to the network card CPU core resources in the distribution range;
the core adjusting module is used for adjusting the number of the network card CPU core resources in the network card resource pool and updating the network card resource pool according to the utilization rate of the network card CPU core resources and a preset rule;
and the resource processing module is used for finishing storage optimization of the server by utilizing the network card resource pool.
In a third aspect, the present application provides an electronic device, comprising:
one or more processors;
and memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
determining the distribution range of the network card CPU core resource according to the physical memory position corresponding to the network card;
establishing a network card resource pool according to the network card CPU core resource in the distribution range;
adjusting the number of the network card CPU core resources in the network card resource pool and updating the network card resource pool according to the utilization rate of the network card CPU core resources and a preset rule;
and finishing storage optimization of the server by utilizing the network card resource pool.
In a fourth aspect, the present application further provides a computer-readable storage medium having a computer program stored thereon, the computer program causing a computer to perform the following operations:
determining the distribution range of the network card CPU core resource according to the physical memory position corresponding to the network card;
establishing a network card resource pool according to the network card CPU core resource in the distribution range;
adjusting the number of the network card CPU core resources in the network card resource pool and updating the network card resource pool according to the utilization rate of the network card CPU core resources and a preset rule;
and finishing storage optimization of the server by utilizing the network card resource pool.
The beneficial effect that this application realized does:
the application provides a storage optimization method based on a network card, which comprises the steps of determining the distribution range of the CPU core resource of the network card according to the physical memory position corresponding to the network card; establishing a network card resource pool according to the network card CPU core resource in the distribution range; adjusting the number of the network card CPU core resources in the network card resource pool and updating the network card resource pool according to the utilization rate of the network card CPU core resources and a preset rule; and finishing storage optimization of the server by utilizing the network card resource pool. The CPU core resource number of the network card resource pool is dynamically adjusted, so that the network card resource pool can be suitable for storage servers with different configurations, and the resource utilization of the storage servers is maximized; in addition, the network card is guaranteed to always keep the highest performance to work, and meanwhile, resource waste caused by the fact that too much CPU core resources are distributed to the network card is avoided; furthermore, by reasonably utilizing the existing resources, better hardware does not need to be upgraded, the read-write IOPS performance and other performances of the storage server can be effectively and greatly improved, the cost is saved, and the maximum value of the existing resources is exerted.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without inventive effort, wherein:
fig. 1 is a schematic diagram of a network card resource pool provided in an embodiment of the present application;
fig. 2 is a flowchart of a storage optimization method based on a network card according to an embodiment of the present application;
FIG. 3 is a diagram of a network card based storage optimization system architecture provided by an embodiment of the present application;
fig. 4 is a structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be understood that throughout the description and claims of this application, unless the context clearly requires otherwise, the words "comprise", "comprising", and the like, are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, what is meant is "including, but not limited to".
It will be further understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present application, "a plurality" means two or more unless otherwise specified.
It should be noted that the terms "S1", "S2", etc. are used for descriptive purposes only, are not intended to be used in a specific sense to refer to an order or sequence, and are not intended to limit the present application, but are merely used for convenience in describing the methods of the present application and are not to be construed as indicating the order of the steps. In addition, technical solutions between various embodiments may be combined with each other, but must be realized by a person skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination should not be considered to exist, and is not within the protection scope of the present application.
As known in the art, conventional storage optimization schemes typically allocate resources to a computer statically, thereby ignoring the dynamic memory requirements of the computer and the different loads it generates when executing different tasks. Even the subsequently proposed storage optimization schemes involving CPU core binding simply bind the network card to fixed CPU cores and cannot allocate resources reasonably and dynamically, which causes a certain amount of resource waste while the performance of the network card still cannot be guaranteed.
In order to solve this technical problem, the application provides a network card-based storage optimization method, so as to ensure that the network card in the current storage server works at its highest performance while avoiding resource waste caused by allocating too many CPU cores to the network card or affecting the normal acquisition of CPU resources by other services.
Example one
As shown in fig. 1, an embodiment of the present application provides a network card resource pool, which includes a pressure monitoring module, a CPU core allocation module, and a non-network card process monitoring and taking-over module; specifically, the process of implementing optimized allocation of server resources by using the network card resource pool disclosed in this embodiment includes:
and S1, establishing a network card resource pool.
Specifically, the physical memory position corresponding to the network card, that is, its Non-Uniform Memory Access (NUMA) node, is obtained first; the CPU core resources that can be used to construct the network card resource pool (namely the network card CPU core resources) are then determined by referring to the CPU core distribution map of that NUMA node. Next, when the network card resource pool is initially constructed, a preset number of CPU core resources are selected from the range of CPU core resources available for constructing the pool, so as to build the network card resource pool in its initial state. For example, if the physical memory position corresponding to the network card is NUMA node 6, then, according to the relationship between the NUMA node and the CPU core resources, the CPU core range for constructing the network card resource pool corresponding to the network card may be determined to be CPU cores 48 to 55 and CPU cores 112 to 119; preferably, when the network card resource pool is initially constructed, 8 CPU cores may be selected from the ranges of CPU cores 48 to 55 and CPU cores 112 to 119 to construct the pool. It should be noted that selecting 8 CPU cores is only one possible choice, and the application does not limit the preset number of selected CPU core resources.
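For illustration only, the following minimal Python sketch shows one way the NUMA node of a network card and the CPU cores local to that node could be read from standard Linux sysfs files; the interface name "eth0", the helper names and the choice of 8 initial cores are assumptions for the example above, not part of the claimed method.

```python
# Sketch: locate the NUMA node of a NIC and the CPU cores local to it (assumed helper code).
from pathlib import Path

def nic_numa_node(ifname: str) -> int:
    """NUMA node of the NIC's PCIe device, read from sysfs (-1 means unknown)."""
    return int(Path(f"/sys/class/net/{ifname}/device/numa_node").read_text().strip())

def numa_cpu_list(node: int) -> list[int]:
    """Expand /sys/devices/system/node/nodeN/cpulist (e.g. '48-55,112-119') into core IDs."""
    cpus: list[int] = []
    text = Path(f"/sys/devices/system/node/node{node}/cpulist").read_text().strip()
    for part in text.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            cpus.extend(range(lo, hi + 1))
        else:
            cpus.append(int(part))
    return cpus

if __name__ == "__main__":
    node = nic_numa_node("eth0")         # e.g. NUMA node 6 in the example above
    candidates = numa_cpu_list(node)     # e.g. cores 48-55 and 112-119
    initial_pool = candidates[:8]        # preset number of cores for the initial pool
    print(node, initial_pool)
```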
After the network card resource pool is initially created, a system service unit (systemd unit) can be created, and three modules are designed: the pressure monitoring module, the CPU core allocation module, and the non-network-card process monitoring and takeover module; the three modules can be named dynamic_pool_monitor.service, dynamic_pool_allocation.service and dynamic_pool_non_monitor.service, respectively. The pressure monitoring module may further define a trigger switch for selecting whether to enable the dynamic resource pool according to requirements.
S2, monitoring the utilization rate of each network card CPU core resource in the network card resource pool, and dynamically adjusting the number of the network card CPU core resources in the network card resource pool according to a preset rule.
Specifically, the pressure monitoring module reads the utilization rate of the network card CPU core resources at a preset time interval and compares it with a preset busy threshold and a preset idle threshold, and the CPU core allocation module dynamically adjusts the number of CPU core resources in the network card resource pool according to the utilization rate read by the pressure monitoring module. The preset time interval may be 1 second or 2 seconds, which is not limited in the present application; preferably, the busy threshold may be set to 95% and the idle threshold to 50%, and both thresholds may be modified according to actual requirements when applied to a specific scenario, which is likewise not limited in the present application.
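A minimal sketch of how the pressure monitoring step might sample per-core utilization from /proc/stat over the preset interval is given below; the 1-second default and the helper names are illustrative assumptions, and the returned ratios would then be compared with the busy and idle thresholds.

```python
# Sketch: sample per-core busy ratios from /proc/stat over one monitoring interval.
import time

def read_core_times() -> dict[str, tuple[int, int]]:
    """Return {'cpuN': (idle_ticks, total_ticks)} for every core listed in /proc/stat."""
    out = {}
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("cpu") and line[3].isdigit():   # skip the aggregate 'cpu' line
                name, *fields = line.split()
                vals = list(map(int, fields))
                idle = vals[3] + vals[4]                        # idle + iowait ticks
                out[name] = (idle, sum(vals))
    return out

def core_utilization(interval: float = 1.0) -> dict[str, float]:
    """Busy ratio per core over one sampling interval (0.0 to 1.0)."""
    before = read_core_times()
    time.sleep(interval)
    after = read_core_times()
    usage = {}
    for cpu in before:
        idle = after[cpu][0] - before[cpu][0]
        total = after[cpu][1] - before[cpu][1]
        usage[cpu] = 1.0 - idle / total if total else 0.0
    return usage
```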
If the utilization rate of any network card CPU core resource in the network card resource pool is smaller than the idle threshold, in order to reasonably distribute the CPU core resource and avoid resource waste, the network card CPU core resource in the network card resource pool needs to be reduced. Specifically, the network card CPU core resources in the network card resource pool may be sequentially reduced according to the set step length until the usage rate of each network card CPU core resource is greater than the idle threshold and less than the busy threshold.
If the utilization rate of any network card CPU core resource in the network card resource pool exceeds the busy threshold, the network card CPU core resources in the pool need to be increased in order to ensure that the network card can work at high performance. Specifically, the network card CPU core resources in the pool may be increased successively according to the set step length until the utilization rate of each network card CPU core resource is less than the busy threshold, with the constraint that the added network card CPU cores cannot exceed the maximum number of CPU core resources available for constructing the pool as specified by the CPU core distribution map of the NUMA node. If the number of network card CPU core resources allocated in the pool has reached this maximum value but the utilization rate of the network card CPU core resources still exceeds the busy threshold, warning information is generated to prompt the user that the storage performance of the network card has reached its limit, and the user may add other storage-capable services or devices.
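Putting the two rules together, a rough sketch of how the CPU core allocation module could grow or shrink the pool in each monitoring cycle is shown below; `candidates` is the NUMA-local core range from the earlier sketch, `usage` maps core IDs to busy ratios, and the 95%/50% constants mirror the example thresholds rather than fixed requirements.

```python
# Sketch: one adjustment cycle for the network card resource pool (assumed thresholds).
BUSY, IDLE = 0.95, 0.50

def adjust_pool(pool: list[int], candidates: list[int],
                usage: dict[int, float], step: int) -> list[int]:
    """Return the pool membership after applying the busy/idle rules once."""
    ratios = [usage[c] for c in pool]
    if any(r > BUSY for r in ratios):
        spare = [c for c in candidates if c not in pool]   # NUMA-local cores not yet in the pool
        if not spare:
            print("WARNING: network card resource pool exhausted; NIC storage performance at its limit")
        else:
            pool = pool + spare[:step]                      # grow by one step, capped at the NUMA maximum
    elif any(r < IDLE for r in ratios) and len(pool) > step:
        pool = pool[:-step]                                 # shrink by one step to avoid wasted cores
    return pool
```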
The step length can be set according to the fluctuation ratio of the CPU utilization rate, for example: if the CPU utilization fluctuates within 10%, a step of 1 to 2 may be selected; if it fluctuates between 10% and 30%, a step of 3 to 4; and if it fluctuates by more than 30%, a step of 5 to 6. That is, when the CPU pressure fluctuates strongly, increasing the step length allows the number of CPU core resources in the network card resource pool to be adjusted more quickly, so that the most reasonable resource usage state is reached sooner.
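The step-length rule above could be captured by a small helper like the following; the 10% and 30% band boundaries simply restate the example figures and are therefore assumptions to be tuned per deployment.

```python
# Sketch: map the observed CPU-utilization fluctuation to an adjustment step length.
def step_from_fluctuation(fluctuation: float) -> int:
    """fluctuation is the relative swing of pool CPU utilization between samples (0.0 to 1.0)."""
    if fluctuation <= 0.10:
        return 1      # small swings: adjust 1-2 cores at a time
    if fluctuation <= 0.30:
        return 3      # moderate swings: 3-4 cores
    return 5          # large swings: 5-6 cores, converging faster under heavy pressure
```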
In addition, the application can also identify non-network-card processes in the network card resource pool through the non-network-card process monitoring and takeover module (whether a process belongs to the network card can be identified through the command pwdx <pid>); when a non-network-card process is identified, it is moved out of the network card resource pool and taken over by CPU core resources outside the pool, so as to ensure that the network card can work at high performance.
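One hedged sketch of the non-network-card process takeover follows: processes currently scheduled on pool cores are listed with ps, their working directories are checked with pwdx as the text suggests, and foreign processes are repinned with taskset to cores outside the pool. The NIC_PATH_HINT marker, the use of the ps 'psr' field and the taskset call are illustrative assumptions about how such a module might be built, not the patent's prescribed implementation.

```python
# Sketch: move non-NIC processes off the pool cores (assumed tooling: ps, pwdx, taskset).
import subprocess

NIC_PATH_HINT = "/opt/nic_service"   # hypothetical working-directory marker for NIC-owned processes

def processes_on_cores(cores: list[int]) -> list[int]:
    """PIDs whose last-run CPU (the ps 'psr' field) lies inside the pool."""
    out = subprocess.run(["ps", "-eo", "pid=,psr="], capture_output=True, text=True).stdout
    pids = []
    for line in out.splitlines():
        if not line.strip():
            continue
        pid, psr = line.split()
        if int(psr) in cores:
            pids.append(int(pid))
    return pids

def is_nic_process(pid: int) -> bool:
    """pwdx prints 'PID: /working/dir'; treat a NIC path prefix as a NIC process."""
    res = subprocess.run(["pwdx", str(pid)], capture_output=True, text=True)
    return NIC_PATH_HINT in res.stdout

def evict_foreign(pool: list[int], all_cores: list[int]) -> None:
    """Repin every non-NIC process found on pool cores onto the cores outside the pool."""
    outside = ",".join(str(c) for c in all_cores if c not in pool)
    for pid in processes_on_cores(pool):
        if not is_nic_process(pid):
            subprocess.run(["taskset", "-pc", outside, str(pid)], check=False)
```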
Different from traditional storage optimization methods, the present method takes into account the particularity of the resources occupied by the network card of a storage server, sets up a dynamic network card resource pool, and dynamically adjusts the number of CPU core resources in the pool through the configured step length, busy threshold and idle threshold, so as to be suitable for storage servers with various configurations, maximize the resource utilization of the storage server, and greatly improve the read-write IOPS and other performance of the storage server. Meanwhile, the entire adjustment process requires no manual intervention, realizing automation that guarantees the efficient operation of the storage server.
Example two
Corresponding to the first embodiment, the embodiment of the present application further provides a storage optimization method based on a network card, as shown in fig. 2, specifically as follows:
2100. determining the distribution range of the network card CPU core resource according to the physical memory position corresponding to the network card;
2200. establishing a network card resource pool according to the network card CPU core resource in the distribution range;
preferably, the establishing a network card resource pool according to the network card CPU core resource within the allocation range further includes:
2210, when the network card resource pool is started, selecting a preset number of the network card CPU core resources from the range of the network card CPU core resources to construct the network card resource pool.
2300. Adjusting the number of the network card CPU core resources in the network card resource pool and updating the network card resource pool according to the utilization rate of the network card CPU core resources and a preset rule;
preferably, the adjusting the number of the network card CPU core resources in the network card resource pool and updating the network card resource pool according to the usage rate of the network card CPU core resources and a preset rule includes:
2310. comparing the utilization rate of each network card CPU core resource with an idle threshold value;
2320. if the utilization rate of any network card CPU core resource is smaller than the idle threshold, the number of the network card CPU core resources in the network card resource pool is decreased progressively according to the step length until the utilization rate of each network card CPU core resource is larger than the idle threshold.
Preferably, the adjusting the number of the network card CPU core resources in the network card resource pool and updating the network card resource pool according to the utilization rate of the network card CPU core resources and a preset rule further includes:
2330. comparing the utilization rate of each network card CPU core resource with a busy threshold value;
2340. and if the utilization rate of any network card CPU core resource is greater than the busy threshold, increasing the number of the network card CPU core resources in the network card resource pool according to the step length until the utilization rate of each network card CPU core resource is less than the busy threshold and the number of the network card CPU core resources is less than or equal to the maximum value of the number of the network card CPU core resources in the distribution range.
Preferably, the method further comprises:
2350. and if the utilization rate of each network card CPU core resource in the network card resource pool is greater than the busy threshold value and the number of the distributed network card CPU core resources reaches the maximum value, generating alarm information to prompt a user that the network card resource pool resources are insufficient.
Preferably, the method further comprises:
2360. detecting the fluctuation proportion of the CPU utilization rate;
2370. and determining the step length according to the fluctuation ratio of the CPU utilization rate and a preset condition.
2400. And finishing storage optimization of the server by utilizing the network card resource pool.
Preferably, the method further comprises:
2410. screening the processes in the network card resource pool;
2420. excluding the screened non-network card processes from the network card resource pool;
2430. and the non-network card process is taken over by CPU core resources outside the network card resource pool.
Example three
As shown in fig. 3, corresponding to the first embodiment and the second embodiment, an embodiment of the present application provides a storage optimization system based on a network card, where the system includes:
a core allocation module 310, configured to determine an allocation range of a network card CPU core resource according to a physical memory location corresponding to the network card;
the data preparation module 320 is used for establishing a network card resource pool according to the network card CPU core resources in the distribution range;
the core adjusting module 330 is configured to adjust the number of network card CPU core resources in the network card resource pool and update the network card resource pool according to the utilization rate of the network card CPU core resources and a preset rule;
and the resource processing module 340 is configured to complete storage optimization of the server by using the network card resource pool.
In some embodiments, the core adjustment module 330 is further configured to compare the utilization rate of each of the network card CPU core resources with an idle threshold; the core adjusting module 330 is further configured to, when the usage rate of any network card CPU core resource is smaller than the idle threshold, decrement the number of network card CPU core resources in the network card resource pool according to the step length until the usage rate of each network card CPU core resource is greater than the idle threshold.
In some embodiments, the core adjustment module 330 is further configured to compare the utilization of each of the network card CPU core resources with a busy threshold; the core adjusting module 330 is further configured to, when the utilization rate of any network card CPU core resource is greater than the busy threshold, increment the number of network card CPU core resources in the network card resource pool according to the step length until the utilization rate of each network card CPU core resource is less than the busy threshold and the number of the network card CPU core resources is less than or equal to the maximum value of the number of the network card CPU core resources in the allocation range.
In some embodiments, the core adjustment module 330 is further configured to generate an alarm message to prompt a user that the network card resource pool resources are in shortage when the usage rate of each network card CPU core resource in the network card resource pool is greater than the busy threshold and the number of the allocated network card CPU core resources reaches a maximum value.
In some embodiments, the data preparation module 320 is further configured to select a preset number of the network card CPU core resources from the range of the network card CPU core resources to construct the network card resource pool when the network card resource pool is enabled.
In some embodiments, the core adjustment module 330 is further configured to detect a CPU utilization fluctuation ratio; the core adjustment module 330 is further configured to determine the step length according to the CPU utilization fluctuation ratio and a preset condition.
In some embodiments, the storage optimization system further includes a process screening module 350 (not shown in the figure), where the process screening module 350 is configured to screen processes in the network card resource pool; the process screening module 350 is further configured to exclude the non-network card processes in the network card resource pool from the network card resource pool; and the non-network card process is taken over by CPU core resources outside the network card resource pool.
Example four
Corresponding to all the above embodiments, an embodiment of the present application provides an electronic device, including:
one or more processors; and memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
determining the distribution range of the network card CPU core resource according to the physical memory position corresponding to the network card;
establishing a network card resource pool according to the network card CPU core resources in the distribution range;
adjusting the number of the network card CPU core resources in the network card resource pool and updating the network card resource pool according to the utilization rate of the network card CPU core resources and a preset rule;
and finishing storage optimization of the server by utilizing the network card resource pool.
In some implementation scenarios, the following operations are also performed:
comparing the utilization rate of each network card CPU core resource with an idle threshold value;
if the utilization rate of any network card CPU core resource is smaller than the idle threshold, the number of the network card CPU core resources in the network card resource pool is decreased progressively according to the step length until the utilization rate of each network card CPU core resource is larger than the idle threshold.
In some implementation scenarios, the following operations are also performed:
comparing the utilization rate of each network card CPU core resource with a busy threshold value;
and if the utilization rate of any network card CPU core resource is greater than the busy threshold, increasing the number of the network card CPU core resources in the network card resource pool according to the step length until the utilization rate of each network card CPU core resource is less than the busy threshold and the number of the network card CPU core resources is less than or equal to the maximum value of the number of the network card CPU core resources in the distribution range.
In some implementation scenarios, the following operations are also performed:
and if the utilization rate of each network card CPU core resource in the network card resource pool is greater than the busy threshold value and the number of the distributed network card CPU core resources reaches the maximum value, generating alarm information to prompt a user that the network card resource pool has insufficient resources.
In some implementation scenarios, the following operations are also performed:
and selecting a preset number of the network card CPU core resources from the range of the network card CPU core resources to construct the network card resource pool when the network card resource pool is started.
In some implementation scenarios, the following operations are also performed:
detecting the fluctuation proportion of the CPU utilization rate;
and determining the step length according to the fluctuation ratio of the CPU utilization rate and a preset condition.
In some implementation scenarios, the following operations are also performed:
screening the processes in the network card resource pool;
excluding the screened non-network card processes from the network card resource pool;
and the non-network card process is taken over by CPU core resources outside the network card resource pool.
Fig. 4 schematically shows an architecture of an electronic device, which may specifically include a processor 410, a video display adapter 411, a disk drive 412, an input/output interface 413, a network interface 414, and a memory 420. The processor 410, the video display adapter 411, the disk drive 412, the input/output interface 413, the network interface 414, and the memory 420 may be communicatively connected by a bus 430.
The processor 410 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits, and is configured to execute related programs to implement the technical solution provided by the present Application.
The Memory 420 may be implemented in the form of a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 420 may store an operating system 421 for controlling execution of the electronic device 400 and a Basic Input Output System (BIOS) 422 for controlling low-level operation of the electronic device 400. In addition, a web browser 423, a data storage management system 424, an icon font processing system 425, and the like, may also be stored. The icon font processing system 425 may be an application program that implements the operations of the foregoing steps in this embodiment of the application. In summary, when the technical solution provided in the present application is implemented by software or firmware, the relevant program code is stored in the memory 420 and called to be executed by the processor 410.
The input/output interface 413 is used for connecting an input/output module to input and output information. The i/o module may be configured as a component in a device (not shown) or may be external to the device to provide a corresponding function. Wherein the input devices may include a keyboard, mouse, touch screen, microphone, various sensors, etc., and the output devices may include a display, speaker, vibrator, indicator light, etc.
The network interface 414 is used to connect a communication module (not shown in the figure) to implement communication interaction between the present device and other devices. The communication module can realize communication in a wired mode (such as USB, network cable and the like) and also can realize communication in a wireless mode (such as mobile network, WIFI, Bluetooth and the like).
Bus 430 includes a path that transfers information between the various components of the device, such as processor 410, video display adapter 411, disk drive 412, input/output interface 413, network interface 414, and memory 420.
In addition, the electronic device 400 may also obtain information of specific pickup conditions from a virtual resource object pickup condition information database for performing condition judgment, and the like.
It should be noted that although the above-mentioned devices only show the processor 410, the video display adapter 411, the disk drive 412, the input/output interface 413, the network interface 414, the memory 420, the bus 430 and so on, in a specific implementation, the device may also include other components necessary for normal execution. Furthermore, it will be understood by those skilled in the art that the apparatus described above may also include only the components necessary to implement the solution of the present application, and not necessarily all of the components shown in the figures.
Example six
Corresponding to all the above embodiments, an embodiment of the present application further provides a computer-readable storage medium, which stores a computer program, where the computer program causes a computer to perform the following operations:
determining the distribution range of the network card CPU core resource according to the physical memory position corresponding to the network card;
establishing a network card resource pool according to the network card CPU core resources in the distribution range;
adjusting the number of the network card CPU core resources in the network card resource pool and updating the network card resource pool according to the utilization rate of the network card CPU core resources and a preset rule;
and finishing storage optimization of the server by utilizing the network card resource pool.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, or the like, and includes several instructions for enabling a computer device (which may be a personal computer, a cloud server, or a network device) to execute the method according to the embodiments or some parts of the embodiments of the present application.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, the system or system embodiments are substantially similar to the method embodiments and therefore are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for related points. The above-described system and system embodiments are only illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A storage optimization method based on a network card is characterized by comprising the following steps:
determining the distribution range of the CPU core resource of the network card according to the physical memory position corresponding to the network card;
establishing a network card resource pool according to the network card CPU core resources in the distribution range;
adjusting the number of the network card CPU core resources in the network card resource pool and updating the network card resource pool according to the utilization rate of the network card CPU core resources and a preset rule;
and finishing storage optimization of the server by utilizing the network card resource pool.
2. The method according to claim 1, wherein the adjusting the number of the network card CPU core resources in the network card resource pool and updating the network card resource pool according to the utilization rate of the network card CPU core resources and a preset rule comprises:
comparing the utilization rate of each network card CPU core resource with an idle threshold value;
if the utilization rate of any network card CPU core resource is smaller than the idle threshold, the number of the network card CPU core resources in the network card resource pool is decreased progressively according to the step length until the utilization rate of each network card CPU core resource is larger than the idle threshold.
3. The method according to claim 2, wherein the adjusting the number of the network card CPU core resources in the network card resource pool and updating the network card resource pool according to the utilization rate of the network card CPU core resources and a preset rule further comprises:
comparing the utilization rate of each network card CPU core resource with a busy threshold value;
and if the utilization rate of any network card CPU core resource is greater than the busy threshold, increasing the number of the network card CPU core resources in the network card resource pool according to the step length until the utilization rate of each network card CPU core resource is less than the busy threshold and the number of the network card CPU core resources is less than or equal to the maximum value of the number of the network card CPU core resources in the distribution range.
4. The method of claim 3, further comprising:
and if the utilization rate of each network card CPU core resource in the network card resource pool is greater than the busy threshold value and the number of the distributed network card CPU core resources reaches the maximum value, generating alarm information to prompt a user that the network card resource pool resources are insufficient.
5. The method of claim 1, wherein the establishing a network card resource pool according to the network card CPU core resources within the allocation range further comprises:
and selecting a preset number of the network card CPU core resources from the range of the network card CPU core resources to construct the network card resource pool when the network card resource pool is started.
6. The method of claim 2, further comprising:
detecting the fluctuation proportion of the CPU utilization rate;
and determining the step length according to the fluctuation ratio of the CPU utilization rate and a preset condition.
7. The method of claim 1, further comprising:
screening the processes in the network card resource pool;
excluding the screened non-network card processes from the network card resource pool;
and the non-network card process is taken over by CPU core resources outside the network card resource pool.
8. A network card based storage optimization system, the system comprising:
the core allocation module is used for determining the allocation range of the network card CPU core resources according to the physical memory position corresponding to the network card;
the data preparation module is used for establishing a network card resource pool according to the network card CPU core resources in the distribution range;
the core adjusting module is used for adjusting the number of the network card CPU core resources in the network card resource pool and updating the network card resource pool according to the utilization rate of the network card CPU core resources and a preset rule;
and the resource processing module is used for finishing storage optimization of the server by utilizing the network card resource pool.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
and memory associated with the one or more processors for storing program instructions which, when read and executed by the one or more processors, perform the method of any of claims 1-7.
10. A computer-readable storage medium, characterized in that it stores a computer program which causes a computer to execute the method of any one of claims 1-7.
CN202210730635.1A 2022-06-24 2022-06-24 Storage optimization method and system based on network card, electronic equipment and storage medium Active CN115113817B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210730635.1A CN115113817B (en) 2022-06-24 2022-06-24 Storage optimization method and system based on network card, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210730635.1A CN115113817B (en) 2022-06-24 2022-06-24 Storage optimization method and system based on network card, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115113817A true CN115113817A (en) 2022-09-27
CN115113817B CN115113817B (en) 2024-10-22

Family

ID=83330210

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210730635.1A Active CN115113817B (en) 2022-06-24 2022-06-24 Storage optimization method and system based on network card, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115113817B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117880107A (en) * 2023-12-13 2024-04-12 天翼云科技有限公司 A method for adaptively adjusting parameters of hot-expanding virtual machine network card equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017142838A (en) * 2017-04-06 2017-08-17 Huawei Technologies Co., Ltd. Resource allocation method of central processing unit and calculation node
CN110995616A (en) * 2019-12-06 2020-04-10 苏州浪潮智能科技有限公司 Management method and device for large-flow server and readable medium
CN112003797A (en) * 2020-07-16 2020-11-27 苏州浪潮智能科技有限公司 Method, system, terminal and storage medium for improving performance of virtualized DPDK network
CN114598746A (en) * 2022-03-07 2022-06-07 中南大学 Method for optimizing load balancing performance between servers based on intelligent network card

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017142838A (en) * 2017-04-06 2017-08-17 Huawei Technologies Co., Ltd. Resource allocation method of central processing unit and calculation node
CN110995616A (en) * 2019-12-06 2020-04-10 苏州浪潮智能科技有限公司 Management method and device for large-flow server and readable medium
CN112003797A (en) * 2020-07-16 2020-11-27 苏州浪潮智能科技有限公司 Method, system, terminal and storage medium for improving performance of virtualized DPDK network
CN114598746A (en) * 2022-03-07 2022-06-07 中南大学 Method for optimizing load balancing performance between servers based on intelligent network card

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈英达; 温柏坚; 黄巨涛; 林强; 唐亮亮: "Research on Technical Specifications for the Construction of the Cloud Infrastructure Service Layer of Guangdong Power Grid Enterprises" (广东电网企业云基础设施服务层建设技术规范研究), Microcomputer Applications (微型电脑应用), no. 10, 19 October 2018 (2018-10-19), pages 15-18 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117880107A (en) * 2023-12-13 2024-04-12 天翼云科技有限公司 A method for adaptively adjusting parameters of hot-expanding virtual machine network card equipment

Also Published As

Publication number Publication date
CN115113817B (en) 2024-10-22

Similar Documents

Publication Publication Date Title
CN109451051B (en) Service request processing method and device, electronic equipment and storage medium
JP5088234B2 (en) Message association processing apparatus, method, and program
CN104834602B (en) A kind of program dissemination method, device and program delivery system
CN113760180A (en) Storage resource management method, device, equipment and computer readable storage medium
CN112346980B (en) Software performance testing method, system and readable storage medium
CN111694517B (en) Distributed data migration method, system and electronic equipment
CN108415772A (en) A kind of resource adjusting method, device and medium based on container
CN110347546B (en) Dynamic adjustment method, device, medium and electronic equipment for monitoring task
CN113204425B (en) Method, device, electronic equipment and storage medium for process management internal thread
CN113032102A (en) Resource rescheduling method, device, equipment and medium
CN114201413A (en) Automatic testing method and system and electronic equipment
CN110297743B (en) Load testing method and device and storage medium
CN113157411A (en) Reliable configurable task system and device based on Celery
CN115113817B (en) Storage optimization method and system based on network card, electronic equipment and storage medium
US11726893B2 (en) System for automatically evaluating a change in a large population of processing jobs
CN114567617A (en) IP address allocation method, system, electronic device and storage medium
CN113961353A (en) Task processing method and distributed system for AI task
CN114816766B (en) Computing resource allocation method and related components thereof
CN115617451B (en) Data processing method and data processing device
US9348667B2 (en) Apparatus for managing application program and method therefor
CN117827428A (en) Cloud service initialization method and system based on rule engine and token bucket algorithm
CN111147554A (en) Data storage method and device and computer system
CN112817687A (en) Data synchronization method and device
CN114816701B (en) Thread management method, electronic device and storage medium
CN115048107A (en) Code compiling method, system, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant