CN111124955B - Cache control method and equipment and computer storage medium - Google Patents
Cache control method and device, and computer storage medium
- Publication number
- CN111124955B CN111124955B CN201811291613.XA CN201811291613A CN111124955B CN 111124955 B CN111124955 B CN 111124955B CN 201811291613 A CN201811291613 A CN 201811291613A CN 111124955 B CN111124955 B CN 111124955B
- Authority
- CN
- China
- Prior art keywords
- cache
- type
- data
- cache data
- storing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/12—Replacement control
- G06F12/121—Replacement control using replacement algorithms
- G06F12/123—Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The application discloses a cache control method, a cache control device, and a computer storage medium, which are used to reduce response delay. The cache control method comprises the following steps: when a first type of cache data needs to be loaded, starting a preloading strategy, where the preloading strategy indicates a storage strategy for storing the first type of cache data, and the first type of cache data is cache data that needs to be acquired from an external memory, stored in the cache, and not replaced; and storing the first type of cache data in the cache according to the storage strategy indicated by the preloading strategy.
Description
Technical Field
The present application relates to the field of cache technologies, and in particular, to a cache control method, a cache control device, and a computer storage medium.
Background
While running, the central processing unit (CPU) reads instruction data stored in the Cache; if the instruction data currently required by the CPU is not in the Cache, the CPU must fetch that instruction data from an external memory and exchange it with instruction data already stored in the Cache.
Because the external memory generally has a slower read speed, and in order to hit more instruction data in the Cache quickly, the Cache stores instruction data under a least recently used (LRU) algorithm mechanism: when the Cache is full, the instruction data that is least recently used is evicted from the Cache, which ensures that the instruction data remaining in the Cache has been accessed recently.
In some cases, however, certain instructions or data are rarely used, or even never used, but once they are needed they must be served quickly, for example the instructions of interrupt handling functions or exception state handling functions. In such a case an ordinary LRU Cache holds no corresponding instruction data, because that data has not been used before; the instruction data must then be read from the external memory on demand, which causes response delay and degrades system performance.
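As a point of reference, the ordinary LRU replacement behaviour described above can be sketched in Python. This is a simplified software model for illustration only; a real Cache implements this in hardware, and the addresses and data used here are hypothetical:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache model: on a miss when full, the least
    recently used entry is evicted to make room for the new one."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # oldest (LRU) first, newest (MRU) last

    def get(self, address):
        if address not in self.entries:
            return None  # miss: caller must fetch from external memory
        self.entries.move_to_end(address)  # mark as most recently used
        return self.entries[address]

    def put(self, address, data):
        if address in self.entries:
            self.entries.move_to_end(address)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict the LRU entry
        self.entries[address] = data

cache = LRUCache(capacity=2)
cache.put(0xA0, "insn_a")
cache.put(0xB0, "insn_b")
cache.get(0xA0)              # touch A, so B becomes least recently used
cache.put(0xC0, "insn_c")    # cache is full, so B is evicted
assert cache.get(0xB0) is None      # miss: B was replaced
assert cache.get(0xA0) == "insn_a"  # hit: A survived
```

This model also exhibits the problem described above: a rarely used entry (such as an interrupt handler) is exactly the one LRU evicts first.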
Disclosure of Invention
The embodiments of the application provide a cache control method, a cache control device, and a computer storage medium, which are used to reduce response delay.
In a first aspect, there is provided a cache control method including:
if the first type of cache data needs to be loaded, starting a preloading strategy; the preloading strategy indicates a storage strategy for storing the first type of cache data, where the first type of cache data is cache data that needs to be acquired from an external memory, stored in the cache, and not replaced;
and storing the first type of cache data in the cache according to a storage strategy indicated by the preloading strategy.
In the embodiment of the application, the preloading strategy causes the first type of cache data, that is, the cache data to be loaded, to be read from the external memory first and then to replace other cache data in the cache, so that the cache data to be loaded is stored in the cache in advance. When the CPU needs to run cache data that requires low response delay, it can obtain that data directly from the cache, instead of first fetching it from the external memory and then replacing cache data in the cache, thereby reducing response delay.
Optionally, storing the first type of cache data in the cache according to a storage policy indicated by the preloading policy includes:
and storing the first type of cache data from the address 0x00 of the cache, wherein the least recently used LRU cache data is stored from the address 0x00 of the cache.
In the embodiment of the application, the first type of cache data is stored so as to preferentially overwrite the LRU cache data stored in the cache, because the least recently used cache data has the least influence on the operation of the CPU.
Optionally, before starting the preloading policy if the first type of cache data needs to be loaded, the method further includes:
configuring and storing a first address and a length of the first type of cache data;
determining, according to the configured address and length, the storage space of the cache to be occupied by the first type of cache data;
and establishing the preloading strategy, wherein the preloading strategy is to store the first type of cache data in the storage space from the address 0x00 of the cache.
In the embodiment of the application, the preloading strategy can determine the cache storage space to be occupied according to the actual size of the first type of cache data to be stored, and can designate the initial storage address of the first type of cache data, which better matches actual requirements.
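The optional steps above (configure address and length, determine the occupied space, establish the strategy) can be sketched as follows. This is an illustrative Python model; the function name and the segment addresses 0x1000 and 0x2000 are assumptions, not taken from the application:

```python
def build_preload_policy(segments):
    """Given (first_address, length_in_words) segments describing the
    first type of cache data, compute the cache storage space they will
    occupy and establish a preload policy that stores them starting
    from address 0x00 of the cache."""
    occupied = sum(length for _, length in segments)
    return {
        "segments": segments,          # where to fetch from external memory
        "occupied_entries": occupied,  # cache entries to reserve
        "start_address": 0x00,         # store from the bottom of the cache
    }

# Two hypothetical segments of lengths 0xC and 0x4 words
policy = build_preload_policy([(0x1000, 0xC), (0x2000, 0x4)])
assert policy["occupied_entries"] == 16  # 16 words = 16 * 4 bytes
assert policy["start_address"] == 0x00
```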
Optionally, after storing the first type of cache data in the cache according to the storage policy indicated by the preloading policy, the method further includes:
and locking the cache space for storing the first type of cache data so that the content of the cache space cannot be replaced.
Optionally, after locking the cache space storing the first type of cache data, the method further includes:
and if the first type of cache data is no longer needed, or the first type of cache data needs to be modified, unlocking the cache space storing the first type of cache data.
In a second aspect, there is provided a cache control apparatus comprising:
the starting unit is configured to start a preloading strategy if the first type of cache data needs to be loaded; the preloading strategy indicates a storage strategy for storing the first type of cache data, where the first type of cache data is cache data that needs to be acquired from an external memory, stored in the cache, and not replaced;
and the caching unit is used for storing the first type of cache data in the cache according to the storage strategy indicated by the preloading strategy.
Optionally, the caching unit is specifically configured to:
and storing the first type of cache data from the address 0x00 of the cache, wherein the least recently used LRU cache data is stored from the address 0x00 of the cache.
Optionally, the device further comprises an establishing unit configured to:
configuring and storing a first address and a length of the first type of cache data;
determine, according to the configured address and length, the storage space of the cache to be occupied by the first type of cache data;
and establishing the preloading strategy, wherein the preloading strategy is to store the first type of cache data in the storage space from the address 0x00 of the cache.
Optionally, the device further comprises a locking unit for:
and locking the cache space for storing the first type of cache data so that the content of the cache space cannot be replaced.
Optionally, the locking unit is further configured to:
and if the first type of cache data is no longer needed, or the first type of cache data needs to be modified, unlock the cache space storing the first type of cache data.
In a third aspect, there is provided a cache control apparatus comprising:
at least one processor, and
a memory coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, the at least one processor implementing the method of any of the first aspects by executing the instructions stored by the memory.
In a fourth aspect, there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the method according to any of the first aspects.
In the embodiment of the application, the preloading strategy causes the first type of cache data, that is, the cache data to be loaded, to be read from the external memory first and then to replace other cache data in the cache, so that the cache data to be loaded is stored in the cache in advance. When the CPU needs to run cache data that requires low response delay, it can obtain that data directly from the cache, instead of first fetching it from the external memory and then replacing cache data in the cache, thereby reducing response delay.
Drawings
FIG. 1 is a flow chart of a cache control method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a cache control device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a cache control device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application.
At present, the Cache stores instruction data under a least recently used (LRU) algorithm mechanism: when the Cache is full, the instruction data that is least recently used is evicted from the Cache, which ensures that the instruction data remaining in the Cache has been accessed recently. In some cases, however, certain instructions or data are rarely used, or even never used, but once they are needed they must be served quickly, for example the instructions of interrupt handling functions or exception state handling functions. In such a case an ordinary LRU Cache holds no corresponding instruction data, because that data has not been used before; the instruction data must then be read from the external memory on demand, which causes response delay and degrades system performance.
In view of this, an embodiment of the present application provides a cache control method that adopts a preloading strategy: the first type of cache data, that is, the cache data to be loaded, is read from the external memory first and then replaces other cache data in the cache, so that the cache data to be loaded is stored in the cache in advance. When the CPU needs to run cache data that requires low response delay, it can obtain that data directly from the cache, instead of first fetching it from the external memory and then replacing cache data in the cache, thereby reducing response delay.
The following describes the technical scheme provided by the embodiment of the application with reference to the attached drawings.
Referring to fig. 1, an embodiment of the present application provides a cache control method, and a specific flow of the cache control method is described below.
S101, if the first type of cache data needs to be loaded, starting a preloading strategy; the preloading strategy indicates a storage strategy for storing the first type of cache data, where the first type of cache data is cache data that needs to be acquired from an external memory, stored in the cache, and not replaced;
because the memory space of the Cache is limited, the Cache is stored by adopting an LRU mechanism at present, so that instructions required by a CPU (Central processing Unit) are not likely to be stored in the Cache, such as response instructions corresponding to some interrupt processing functions, abnormal state processing functions and the like.
Therefore, the embodiment of the application provides a new caching mechanism for the Cache, so that the response instructions corresponding to interrupt handling functions, exception state handling functions and the like are stored in the Cache; the CPU can then acquire these response instructions directly from the Cache without reading them from the external memory on demand, thereby reducing response delay and improving system performance.
In the embodiment of the application, the cache data needing to be preloaded is called first-type cache data, and the first-type cache data is cache data which needs to be acquired from an external memory and stored in a cache and cannot be replaced.
Specifically, if the cache control device determines that the first type of cache data needs to be loaded, that is, that the first type of cache data should be stored in the cache in advance, it may start a preloading strategy, which indicates the storage strategy for storing the first type of cache data.
The embodiment of the application seeks to minimize the influence on the cache data originally stored in the Cache and on CPU operation, and builds on the LRU storage mechanism of the Cache, in which the LRU cache data is stored starting from the bottom address of the Cache (address 0x00), that is, from the lowest addresses of the Cache. Therefore, the preloading strategy in the embodiment of the application indicates that the first type of cache data is stored starting from the bottom address of the cache, so that the first type of cache data replaces the LRU cache data and the influence on CPU operation is reduced as much as possible.
The cache control device may determine in advance whether the cache data in the cache needs to be replaced. Therefore, before starting the preloading strategy, the cache control device may also configure in advance, according to the first type of cache data, a first address and a length for storing the first type of cache data, and determine, according to the configured address and length, the cache storage space to be occupied by the first type of cache data, thereby establishing the preloading strategy that designates storing the first type of cache data in the determined storage space starting from the bottom address of the cache.
S102, storing the first type of cache data in a cache according to a storage strategy indicated by the preloading strategy.
After the cache control device starts the preloading strategy, it may store the first type of cache data in the Cache according to the storage strategy indicated by the preloading strategy, that is, store the first type of cache data in the determined storage space starting from the bottom address of the Cache.
After determining that the first type of cache data has been stored in the Cache, the cache control device may lock the Cache space storing the first type of cache data so that the contents of that Cache space cannot be replaced, ensuring as far as possible that the first type of cache data is not replaced. Conversely, if the cache data in the Cache does need to be replaced, that is, when other first-type cache data needs to be loaded, the cache control device performs an unlocking operation on the Cache space storing the first type of cache data in order to load the other first-type cache data.
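The locking and unlocking steps described here can be illustrated with a minimal software model. This is an illustrative sketch only; the class and method names are assumptions, not part of the application:

```python
class LockableCache:
    """Sketch of the lock/unlock step: entries in a locked region
    cannot be replaced until the region is unlocked."""

    def __init__(self):
        self.data = {}
        self.locked = set()

    def store(self, address, value):
        if address in self.locked:
            raise RuntimeError("locked entry cannot be replaced")
        self.data[address] = value

    def lock(self, addresses):
        self.locked.update(addresses)

    def unlock(self, addresses):
        self.locked.difference_update(addresses)

cache = LockableCache()
cache.store(0x00, "preloaded_isr")   # first type of cache data
cache.lock({0x00})
try:
    cache.store(0x00, "other")       # replacement is refused while locked
    replaced = True
except RuntimeError:
    replaced = False
assert not replaced
cache.unlock({0x00})                 # e.g. the data needs to be modified
cache.store(0x00, "updated_isr")     # after unlocking, the store succeeds
assert cache.data[0x00] == "updated_isr"
```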
In order to facilitate understanding, the technical scheme provided by the embodiment of the application is described below with respect to the LRU Cache.
Assume the LRU Cache storage space of a certain system is 2 KByte, with the Cache structure shown in Table 1. The leftmost column in Table 1 indicates addresses; both the address of a piece of instruction data and the instruction data itself are 32 bits, that is, 4 bytes each, so the maximum address of the Cache space is 0xFF (2 x 1024 / 8 = 256 entries). The first new address-and-data pair is put at address 0xFF, the second at address 0xFE, and so on.
TABLE 1
If the Cache is exactly full and, for example, instruction data_0xFC has just been used, instruction data_0xFC is moved up, as shown in Table 2.
TABLE 2
If the Cache is full when new data arrives, the bottommost data at address 0xFF is removed according to the LRU algorithm mechanism, the data at the other addresses 0x00 to 0xFE is shifted down as a whole to 0x01 to 0xFF, and the vacated 0x00 slot is used to store the new address and data. For example, if the Cache is full and new instruction data A arrives, the storage is as shown in Table 3.
TABLE 3 Table 3
If the Cache is not full when new data arrives, the free space starting from the bottom address is used to store the new address and data according to the LRU algorithm mechanism. For example, if the Cache is not full and new instruction data A arrives, the storage is as shown in Table 4.
TABLE 4 Table 4
For Tables 1 to 4 above, there are three possible cases when the CPU accesses the Cache:
(1) The instruction data is in the Cache: a hit. Assume the Cache is full and the hit is on instruction data_0xFC. After the CPU takes the instruction data, the 64-bit entry at address 0xFC in the Cache is moved to address 0x00 according to the LRU algorithm mechanism, the entries at addresses 0x00 to 0xFB are each shifted down by one, and the entries below address 0xFC are unchanged, as shown in Table 2. If the Cache is not full (filled down to address 0x20), the mechanism is the same: the 0xFC entry is moved up to address 0x20, and the entries that were at addresses 0x20 to 0xFB are each shifted down by one.
(2) The instruction data is not in the Cache and the Cache is full. The instruction data must then be read from the external memory; assume the new instruction data address is A and the instruction data is A. According to the LRU algorithm mechanism, the Cache structure after the new address and data are added is shown in Table 3.
(3) The instruction data is not in the Cache and the Cache is not full. The instruction data must then be read from the external memory; assume the new instruction data address is A and the instruction data is A, stored to address 0xFC. According to the LRU algorithm mechanism, the Cache structure after the new address and data are added is shown in Table 4.
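The three access cases can be reproduced with a small list-based model, where index 0 plays the role of the Cache address nearest 0x00 (most recently used) and the last index plays the role of address 0xFF (least recently used). This is an illustrative sketch of Tables 1 to 4, not the hardware implementation, and the instruction addresses and data are hypothetical:

```python
CAPACITY = 256  # 2 KByte / 8 bytes per (address, data) entry

def access(entries, addr, data):
    """Simulate one CPU access to the table-model Cache.
    entries[0] is the most recently used pair (address 0x00);
    entries[-1] is the least recently used pair (address 0xFF)."""
    for i, (a, d) in enumerate(entries):
        if a == addr:                          # case (1): hit
            entries.insert(0, entries.pop(i))  # move entry to the top
            return d
    # miss: fetch from external memory (simulated by the `data` argument)
    if len(entries) >= CAPACITY:               # case (2): full, evict 0xFF
        entries.pop()
    entries.insert(0, (addr, data))            # case (3): store at the top
    return data

entries = [(a, "d%02X" % a) for a in range(CAPACITY)]  # a full Cache
access(entries, 0xFC, None)            # hit on instruction data_0xFC
assert entries[0][0] == 0xFC           # ...moved to the top slot
access(entries, 0x1234, "A")           # miss while full: bottom evicted
assert len(entries) == CAPACITY        # size unchanged after eviction
assert entries[0] == (0x1234, "A")     # new pair stored at the top
```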
The Cache structure obtained with the storage mechanism determined by the cache control method of the embodiment of the application is shown in Table 5.
TABLE 5
The LRU Cache structure in the embodiment of the present application is shown in Table 5; it comprises an LRU Cache and a Cache configuration module (not shown in Table 5). The Cache configuration module can be understood as an enabling module for starting the preloading strategy. The Cache configuration module may set the enable signal En_bit of the preloading strategy to 0, that is, turn the preload function off, in which case the LRU Cache of the embodiment of the application behaves the same as an ordinary LRU Cache. If En_bit is not 0, the preload function is started, and the LRU Cache of the embodiment of the application behaves differently from an ordinary LRU Cache.
If the preloaded LRU Cache function needs to be started, the instruction data addresses and lengths (unit: WORD, 4 bytes) that need to be preloaded must be configured in the Cache configuration module, and then En_bit = 1'b1 is set. The preloaded instruction data addresses and lengths may be configured as several segments or as only one segment. Assume a system has 2 segments of instructions or data that need to be preloaded: the data at address 1 has length 0xC and the data at address 2 has length 0x4, so the data or instructions to be preloaded total 16 words, that is, 16 x 4 bytes. Configuring En_bit = 1'b1 starts the Cache preload function. According to the configured addresses and lengths, 16 Cache addresses need to be occupied (0xF0 to 0xFF), and this Cache space is locked and used to store the preloaded instructions or data. The preloaded instructions or data are placed at the bottom of the Cache rather than the top because, by the LRU principle, it is the bottom contents that are least recently used, so directly overwriting the bottom contents has little effect.
After the space is locked, the cache control device reads the corresponding instruction data from the external memory according to the configured addresses and lengths and stores that instruction data in the locked Cache space, as shown in Table 5. Since the locked space is used to store the preloaded contents and cannot be changed, the bottom flag (Bottom_flag) of the Cache is changed from 0xFF to 0xEF. The 0x00 to 0xEF space of the Cache still behaves the same as an ordinary LRU Cache. If other instructions and data need to be preloaded afterwards, En_bit = 1'b0 must be set first; after Bottom_flag is restored to 0xFF, the new instructions and data can be preloaded.
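The preload flow around Table 5 (set En_bit, configure the segments, lock the bottom 0xF0 to 0xFF space, change Bottom_flag from 0xFF to 0xEF) can be sketched as follows. This is an illustrative Python model of the mechanism; the class name and the external memory addresses 0x100 and 0x200 are assumptions:

```python
class PreloadLRUCache:
    """Sketch of the preload mechanism: when En_bit is set, the
    configured segments are fetched from external memory, stored at
    the bottom of the Cache (0xF0-0xFF here) and locked; Bottom_flag
    shrinks from 0xFF to 0xEF so that ordinary LRU replacement only
    uses addresses 0x00-0xEF."""

    def __init__(self, size=256):
        self.size = size
        self.slots = {}
        self.en_bit = 0
        self.bottom_flag = size - 1   # 0xFF: whole Cache usable by LRU

    def preload(self, segments, external_memory):
        self.en_bit = 1
        total = sum(length for _, length in segments)
        addr = self.size - total      # first locked slot, e.g. 0xF0
        for base, length in segments:
            for offset in range(length):
                self.slots[addr] = external_memory[base + offset]
                addr += 1
        self.bottom_flag = self.size - total - 1  # e.g. 0xEF

    def disable_preload(self):
        self.en_bit = 0
        self.bottom_flag = self.size - 1  # restore to 0xFF

# Hypothetical external memory holding the two preload segments
ext = {a: "w%X" % a for a in list(range(0x100, 0x10C)) + list(range(0x200, 0x204))}
cache = PreloadLRUCache()
cache.preload([(0x100, 0xC), (0x200, 0x4)], ext)
assert cache.bottom_flag == 0xEF      # LRU region shrank to 0x00-0xEF
assert len(cache.slots) == 16         # 16 words locked at 0xF0-0xFF
cache.disable_preload()               # required before preloading new data
assert cache.bottom_flag == 0xFF
```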
In summary, in the embodiment of the present application, the preloading strategy causes the first type of cache data, that is, the cache data to be loaded, to be read from the external memory first and then to replace other cache data in the cache, so that the cache data to be loaded is stored in the cache in advance. When the CPU needs to run cache data that requires low response delay, it can obtain that data directly from the cache, instead of first fetching it from the external memory and then replacing cache data in the cache, thereby reducing response delay.
The apparatus provided by the embodiments of the present application will be described below with reference to the accompanying drawings.
Referring to fig. 2, based on the same inventive concept, an embodiment of the present application provides a cache control apparatus including a starting unit 201 and a caching unit 202. Wherein:
the starting unit 201 may be configured to start a preloading strategy if the first type of cache data needs to be loaded; the preloading strategy indicates a storage strategy for storing the first type of cache data, where the first type of cache data is cache data that needs to be acquired from an external memory, stored in the cache, and not replaced;
the caching unit 202 may be configured to store the first type of cache data in the cache according to a storage policy indicated by the preloading policy.
Optionally, the caching unit 202 is specifically configured to:
the first type of cache data is stored starting at address 0x00 of the cache, wherein the least recently used LRU cache data is stored starting at address 0x00 of the cache.
Optionally, the device further comprises an establishing unit configured to:
configure a first address and a length for storing the first type of cache data;
determine, according to the configured address and length, the storage space of the cache to be occupied by the first type of cache data;
a preload strategy is established, wherein the preload strategy is to store the first type of cache data in the storage space starting from address 0x00 of the cache.
Optionally, the device further comprises a locking unit for:
the cache space storing the first type of cache data is locked such that the contents of the cache space cannot be replaced.
Optionally, the locking unit is further configured to:
and if the first type of cache data is no longer needed, or the first type of cache data needs to be modified, unlock the cache space storing the first type of cache data.
The cache control device may be used to perform the method provided by the embodiment shown in fig. 1. Therefore, for the functions that can be implemented by the functional modules in the device, reference may be made to the corresponding descriptions in the embodiment shown in fig. 1, which are not repeated.
Referring to fig. 3, based on the same inventive concept, an embodiment of the present application provides a cache control device. The device may include at least one processor 301, where the processor 301 is configured, when executing a computer program stored in the memory, to implement the steps of the cache control method shown in fig. 1 provided in the embodiment of the present application: when the first type of cache data needs to be loaded, starting a preloading strategy, where the preloading strategy indicates a storage strategy for storing the first type of cache data, and the first type of cache data is cache data that needs to be acquired from an external memory, stored in the cache, and not replaced; and storing the first type of cache data in the cache according to the storage strategy indicated by the preloading strategy.
Optionally, the processor 301 is specifically configured to:
and storing the first type of cache data from the address 0x00 of the cache, wherein the least recently used LRU cache data is stored from the address 0x00 of the cache.
Optionally, the processor 301 is further configured to:
configuring a first address and a length for storing the first type of cache data;
determining, according to the configured address and length, the storage space of the cache to be occupied by the first type of cache data;
a preload strategy is established, wherein the preload strategy is to store the first type of cache data in the storage space starting from address 0x00 of the cache.
Optionally, the processor 301 is further configured to:
the cache space storing the first type of cache data is locked such that the contents of the cache space cannot be replaced.
Optionally, the processor 301 is further configured to:
and if the first type of cache data is no longer needed, or the first type of cache data needs to be modified, unlocking the cache space storing the first type of cache data.
Optionally, the processor 301 may be a central processing unit, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits for controlling program execution.
Optionally, the cache control device further includes a memory 302 connected to the at least one processor, where the memory 302 may include a read-only memory (ROM), a random access memory (Random Access Memory, RAM), and a disk memory. The memory 302 is used to store data required for the operation of the processor 301, that is, instructions executable by the at least one processor 301; the at least one processor 301 performs the method shown in fig. 1 by executing the instructions stored in the memory 302. The number of memories 302 is one or more. The memory 302 is shown in fig. 3, but it should be noted that the memory 302 is not an essential functional block, and it is therefore shown with a broken line in fig. 3.
The physical device corresponding to the starting unit 201 and the caching unit 202 may be the aforementioned processor 301. The cache control device may be used to perform the method provided by the embodiment shown in fig. 1. Therefore, for the functions that can be implemented by the functional modules in the device, reference may be made to the corresponding descriptions in the embodiment shown in fig. 1, which are not repeated here.
Embodiments of the present application also provide a computer storage medium storing computer instructions that, when executed on a computer, cause the computer to perform a method as described in fig. 1.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above. The specific working processes of the above-described systems, devices and units may refer to the corresponding processes in the foregoing method embodiments, which are not described herein.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a Universal Serial Bus (USB) flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from its spirit and scope. The present application is therefore intended to cover such modifications and variations, provided they fall within the scope of the appended claims and their equivalents.
Claims (6)
1. A method of cache control, comprising:
configuring a first address and a length for storing a first type of cache data;
determining, according to the configured address and length, the storage space occupied by the first type of cache data in the cache;
establishing a preload policy, wherein the preload policy is used for storing the first type of cache data in the storage space starting from address 0x00 of the cache;
if the first type of cache data needs to be loaded, starting the preload policy, wherein the preload policy is used for indicating a storage policy for storing the first type of cache data; the first type of cache data is cache data that needs to be acquired from an external memory, stored in the cache, and cannot be replaced; the first type of cache data is instruction data, and the instruction data comprises response instructions corresponding to an interrupt handling function and an exception handling function;
storing the first type of cache data in the cache according to the storage policy indicated by the preload policy, wherein storing the first type of cache data in the cache according to the storage policy indicated by the preload policy comprises: storing the first type of cache data starting from address 0x00 of the cache, wherein least recently used (LRU) cache data is stored starting from address 0x00 of the cache.
2. The control method of claim 1, wherein, after storing the first type of cache data in the cache according to the storage policy indicated by the preload policy, the method further comprises:
locking the cache space storing the first type of cache data so that the content of the cache space cannot be replaced.
3. The control method of claim 2, further comprising, after locking the cache space storing the first type of cache data:
if the first type of cache data is no longer needed, or the first type of cache data needs to be modified, unlocking the cache space storing the first type of cache data.
4. A cache control apparatus, comprising:
the establishing unit is used for configuring a first address and a length for storing a first type of cache data; determining, according to the configured address and length, the storage space occupied by the first type of cache data in the cache; and establishing a preload policy, wherein the preload policy is used for storing the first type of cache data in the storage space starting from address 0x00 of the cache;
the starting unit is used for starting the preload policy if the first type of cache data needs to be loaded, wherein the preload policy is used for indicating a storage policy for storing the first type of cache data; the first type of cache data is cache data that needs to be acquired from an external memory, stored in the cache, and cannot be replaced; the first type of cache data is instruction data, and the instruction data comprises response instructions corresponding to an interrupt handling function and an exception handling function;
the caching unit is used for storing the first type of cache data in the cache according to the storage policy indicated by the preload policy, wherein storing the first type of cache data in the cache according to the storage policy indicated by the preload policy comprises: storing the first type of cache data starting from address 0x00 of the cache, wherein least recently used (LRU) cache data is stored starting from address 0x00 of the cache.
5. A cache control apparatus, comprising:
at least one processor; and
a memory coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor, and the at least one processor implements the method of any one of claims 1-3 by executing the instructions stored in the memory.
6. A computer storage medium having stored thereon a computer program which, when executed by a processor, implements the method of any one of claims 1-3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811291613.XA CN111124955B (en) | 2018-10-31 | 2018-10-31 | Cache control method and equipment and computer storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111124955A CN111124955A (en) | 2020-05-08 |
CN111124955B true CN111124955B (en) | 2023-09-08 |
Family
ID=70494489
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811291613.XA Active CN111124955B (en) | 2018-10-31 | 2018-10-31 | Cache control method and equipment and computer storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111124955B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114968076A (en) * | 2021-02-25 | 2022-08-30 | 华为技术有限公司 | Method, apparatus, medium, and program product for storage management |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5983324A (en) * | 1996-03-28 | 1999-11-09 | Hitachi, Ltd. | Data prefetch control method for main storage cache for protecting prefetched data from replacement before utilization thereof |
US6253306B1 (en) * | 1998-07-29 | 2001-06-26 | Advanced Micro Devices, Inc. | Prefetch instruction mechanism for processor |
CN1410893A (en) * | 2002-04-09 | 2003-04-16 | IP-First LLC | Microprocessor with prefetch mechanism and method of prefetching into cache memory |
CN101558391A (en) * | 2006-12-15 | 2009-10-14 | 密克罗奇普技术公司 | Configurable cache for a microprocessor |
CN106164875A (en) * | 2014-04-04 | 2016-11-23 | Qualcomm Inc. | Adaptive cache prefetching based on competing dedicated prefetch policies in dedicated cache sets to reduce cache pollution |
CN106227676A (en) * | 2016-09-22 | 2016-12-14 | Datang Microelectronics Technology Co., Ltd. | Cache, and method and apparatus for reading data from the cache |
CN106416277A (en) * | 2014-06-04 | 2017-02-15 | 索尼公司 | Transmission device, transmission method, reception device, and reception method |
CN107911799A (en) * | 2017-05-18 | 2018-04-13 | Beijing Jutongda Technology Co., Ltd. | Method using intelligent routing |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8806177B2 (en) * | 2006-07-07 | 2014-08-12 | International Business Machines Corporation | Prefetch engine based translation prefetching |
CN102486753B (en) * | 2009-11-30 | 2015-09-16 | 国际商业机器公司 | Build and allow the method for access cache, equipment and storage system |
US9122613B2 (en) * | 2013-03-07 | 2015-09-01 | Arm Limited | Prefetching of data and instructions in a data processing apparatus |
- 2018-10-31: CN201811291613.XA filed; granted as patent CN111124955B (status: Active)
Non-Patent Citations (1)
Title |
---|
Li Yong; Wang Ran; Feng Dan; Shi Zhan. A cache management algorithm for heterogeneous storage systems. Journal of Computer Research and Development. 2016, 1953-1963. *
Also Published As
Publication number | Publication date |
---|---|
CN111124955A (en) | 2020-05-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111221476B (en) | Front-end command processing method and device for improving SSD performance, computer equipment and storage medium | |
US20080086599A1 (en) | Method to retain critical data in a cache in order to increase application performance | |
JP5292978B2 (en) | Control apparatus, information processing apparatus, and memory module recognition method | |
US10324760B2 (en) | Leases for blocks of memory in a multi-level memory | |
US8417889B2 (en) | Two partition accelerator and application of tiered flash to cache hierarchy in partition acceleration | |
CN110968529A (en) | Method and device for realizing non-cache solid state disk, computer equipment and storage medium | |
EP2911065B1 (en) | Distributed procedure execution and file systems on a memory interface | |
US9552303B2 (en) | Method and system for maintaining release consistency in shared memory programming | |
KR20050056221A (en) | System and method for preferred memory affinity | |
EP2911062A1 (en) | Method and device for adjusting cache block length of cache memory | |
US6684267B2 (en) | Direct memory access controller, and direct memory access control method | |
CN111124955B (en) | Cache control method and equipment and computer storage medium | |
US20070226382A1 (en) | Method for improving direct memory access performance | |
US10216634B2 (en) | Cache directory processing method for multi-core processor system, and directory controller | |
US7225313B2 (en) | Demotion of memory pages to largest possible sizes | |
CN101599049B (en) | Method for controlling discontinuous physical addresses of DMA access and DMA controller | |
CN116670661A (en) | Cache access method of graphics processor, graphics processor and electronic device | |
CN101441551A (en) | Computer, external memory and method for processing data information in external memory | |
US20040162942A1 (en) | Computer system embedding sequential buffers therein for improving the performance of a digital signal processing data access operation and a method thereof | |
JP4431492B2 (en) | Data transfer unit that supports multiple coherency granules | |
CN108614782B (en) | Cache access method for data processing system | |
JP4528491B2 (en) | Information processing device | |
CN114327281B (en) | TCG software and hardware acceleration method and device for SSD, computer equipment and storage medium | |
JP2007041813A (en) | Information processing system and information processing method | |
US7213142B2 (en) | System and method to initialize registers with an EEPROM stored boot sequence |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||