
CN115277596B - Cache distribution system based on multiple priorities - Google Patents

Cache distribution system based on multiple priorities

Info

Publication number
CN115277596B
CN115277596B
Authority
CN
China
Prior art keywords
information
cache
state
request information
receiving end
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211154581.5A
Other languages
Chinese (zh)
Other versions
CN115277596A (en)
Inventor
Name withheld at the inventor's request
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Muxi Integrated Circuit Shanghai Co ltd
Original Assignee
Muxi Integrated Circuit Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Muxi Integrated Circuit Shanghai Co ltd filed Critical Muxi Integrated Circuit Shanghai Co ltd
Priority to CN202211154581.5A priority Critical patent/CN115277596B/en
Publication of CN115277596A publication Critical patent/CN115277596A/en
Application granted granted Critical
Publication of CN115277596B publication Critical patent/CN115277596B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H04L 47/62 Queue scheduling characterised by scheduling criteria
    • H04L 47/625 Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/6275 Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention relates to a multi-priority-based cache distribution system comprising a buffer memory and at least one state updater. One end of the buffer memory is connected with M sending ends and the other end with N receiving ends; the buffer memory is used for caching the request information sent by the sending ends, and the sending ends correspond to P priorities. The state updater is respectively connected with the buffer memory and the receiving ends, and is used for storing the buffer address information of each currently cached piece of request information, the corresponding sending end priority information, the receiving end identification information, and the current state information of each receiving end. Based on the information currently stored in the state updater, the buffer memory determines the cache request information to be sent and the corresponding receiving end, and sends the cache request information to be sent to that receiving end. The invention prevents high-priority request information from being blocked during cache distribution and improves cache distribution efficiency.

Description

Cache distribution system based on multiple priorities
Technical Field
The invention relates to the technical field of computers, in particular to a cache distribution system based on multiple priorities.
Background
In request-processing scenarios, multiple sending ends typically send request information to multiple receiving ends. A receiving end usually needs a certain amount of time to process a received request and cannot accept a new request while it is processing. Different sending ends, however, have different response rates, some slow and some fast; if the request information of all these channels is cached in a single FIFO, requests from different sending ends easily block one another and cache distribution efficiency is low. If a separate FIFO is set up for each sending end, a large amount of area is occupied, resources are wasted, and performance is reduced. How to provide a reasonable cache distribution technique and improve cache distribution efficiency has therefore become an urgent technical problem to be solved.
In addition, if the receiving ends are caches, that is, the application scenario is multiple sending ends requesting information from multiple caches, then after cache distribution, sending ends with different response rates that are connected to the same cache terminal will lower the cache hit rate and thus affect the processing efficiency of the request information. How to improve the efficiency with which multiple sending ends send request information to multiple caches is therefore also an urgent technical problem to be solved.
Disclosure of Invention
The invention aims to provide a multi-priority-based cache distribution system, which avoids the blockage of high-priority request information in cache distribution and improves the cache distribution efficiency.
The invention provides a cache distribution system based on multiple priorities, which comprises: a buffer memory and at least one state updater, wherein,
one end of the buffer memory is connected with M sending ends, the other end of the buffer memory is connected with N receiving ends, the buffer memory is used for caching request information sent by the M sending ends, the M sending ends correspond to P priorities, P is less than or equal to N, and each priority corresponds to at least one sending end;
the state updater is respectively connected with the buffer memory and the N receiving ends, and is used for storing, for each piece of request information currently cached in the buffer memory, its buffer address information, the corresponding sending end priority information and the receiving end identification information, as well as the current state information of each receiving end, wherein a receiving end's state information is a non-idle state while it is processing request information and an idle state while it is not processing request information;
and, based on the information currently stored in the state updater, the buffer memory takes the highest-priority cache request information that can currently be sent out as the cache request information to be sent, determines the corresponding receiving end, and sends the cache request information to be sent to that receiving end.
Compared with the prior art, the invention has obvious advantages and beneficial effects. By means of the technical scheme, the cache distribution system based on multiple priorities can achieve considerable technical progress and practicability, has wide industrial utilization value, and at least has the following advantages:
the system of the invention is matched with at least one state updater through one buffer memory, so that the sending end with high priority is only influenced by the state of the receiving end and is not blocked by the sending end with low priority, and the state updater only stores a small amount of information without occupying a large amount of physical space, thereby realizing reasonable distribution of the buffer requests of the receiving ends with different priorities, avoiding the blocking of the request information with high priority in the buffer distribution and improving the buffer distribution efficiency.
The foregoing description is only an overview of the technical solutions of the present invention, and in order to make the technical means of the present invention more clearly understood, the present invention may be implemented in accordance with the content of the description, and in order to make the above and other objects, features, and advantages of the present invention more clearly understood, the following preferred embodiments are described in detail with reference to the accompanying drawings.
Drawings
Fig. 1 is a schematic diagram of a multi-priority-based cache distribution system according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a cache query system based on multiple priorities according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means and effects of the present invention for achieving the predetermined purpose, the following detailed description will be given with reference to the accompanying drawings and preferred embodiments of a multi-priority-based cache distribution system and a multi-priority-based cache query system according to the present invention.
The first embodiment,
An embodiment provides a multi-priority-based cache distribution system, as shown in fig. 1, including a buffer memory and at least one state updater. The buffer memory has only one input port and one output port, so only one request message can be stored and only one output in each cycle, where one cycle may be one clock cycle.
One end of the buffer memory is connected with M sending ends, the other end is connected with N receiving ends, and the buffer memory is used for caching the request information sent by the M sending ends. The M sending ends correspond to P priorities, P is less than or equal to N, and each priority corresponds to at least one sending end. It should be noted that the sending ends may have different response levels: the higher the response rate and the real-time requirement, the higher the response level and the higher the corresponding priority. It can be understood that the sending ends and receiving ends are determined by the specific application scenario; a sending end may be, for example, a GPU core or a DMA (Direct Memory Access) end, and a receiving end may be, for example, a cache terminal. In general, the buffer memory is provided to match the rates, bandwidths and so on of the multiple sending ends and multiple receiving ends, and its specific size is determined by factors such as the number of sending ends, the sending rate and frequency, the number of receiving ends, and the receiving rate.
The state updater is respectively connected with the buffer memory and the N receiving ends, and is used for storing the cache address information of each piece of request information currently cached in the buffer memory, the corresponding sending end priority information, the receiving end identification information, and the current state information of each receiving end. It should be noted that the sending end priority information may be set separately, or a priority may be bound directly to the sending end identification information so that the identification information itself represents the priority. A receiving end is in the non-idle state while it is processing request information and in the idle state while it is not; it can be understood that when a receiving end receives a piece of distributed request information it starts processing it and enters the non-idle state, and when processing finishes it automatically switches back to the idle state.
It should be noted that the request information sent by a sending end includes sending end identification information, sending end priority information, receiving end identification information and request data information, where the request data information is the content of the specific request. The request data corresponding to the request information is cached in the buffer memory, while the state updater stores only a small amount of address, priority and receiving end identification information per request, so the bit width of the buffer memory is far greater than that of the state updater.
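To make the split concrete, the following is a minimal sketch, in Python, of the data layout implied by the description above; all type and field names (Request, StateEntry, receiver_idle, and the choice of 4 receiving ends) are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Request:
    """Full request as held in the buffer memory (wide)."""
    sender_id: int      # sending end identification information
    priority: int       # sending end priority; lower value = higher priority (assumption)
    receiver_id: int    # receiving end identification information
    payload: bytes      # request data information, the content of the specific request

@dataclass
class StateEntry:
    """Narrow bookkeeping record kept by the state updater (small bit width)."""
    buffer_addr: int    # cache address of the request inside the buffer memory
    priority: int       # copy of the sending end priority
    receiver_id: int    # copy of the receiving end identification

# Per-receiving-end state: True = idle, False = non-idle (processing a request).
receiver_idle = {r: True for r in range(4)}   # 4 receiving ends, an illustrative N
```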
Based on the information currently stored in the state updater, the buffer memory takes the highest-priority cache request information that can currently be sent out as the cache request information to be sent, determines the corresponding receiving end, and sends the cache request information to be sent to that receiving end.
As an example, in the i-th cycle:
the buffer memory is used for sending the cache request information to be sent determined in the (i-1) th period to the corresponding receiving end, and the corresponding receiving end is converted from an idle state to a non-idle state.
The buffer memory is also used for acquiring request information with the highest current priority from M sending ends and storing the request information into the buffer memory.
The state updater is used for acquiring cache address information, corresponding sending end priority information and receiving end identification information of the cache request information stored in the ith period from the cache memory and updating the cache address information, the corresponding sending end priority information and the receiving end identification information into the state updater; it can be understood that the cache request information stored in the ith cycle is the request information with the highest current priority, which is obtained by the cache memory from the M sending ends in the ith cycle.
And the state updater is also used for acquiring the current state information of the N receiving ends and updating the current state information into the state updater.
And the state updater is further used for determining the cache request information to be sent and the corresponding receiving end in the (I + 1) th period based on the updated information in the ith period, wherein the value range of I is from 1 to I, and I is the total period number.
The buffer memory can distribute the request information with high priority as far as possible by matching with the state information in the state updater, so that the request information of the sending end with low priority is prevented from blocking the request information with high priority.
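A minimal, self-contained sketch of the four actions assigned to cycle i is given below, assuming simple Python containers (a dict as the buffer memory, a list of narrow records as the state updater, a dict of idle flags refreshed externally when a receiving end finishes processing); the function and field names are assumptions for illustration only, not the patent's hardware design.

```python
def run_cycle(buffer_mem, state_records, receiver_idle, incoming, to_send):
    """One cycle i of the buffer memory / state updater cooperation (sketch)."""
    # 1. Send the request chosen in cycle i-1; its receiving end turns non-idle.
    if to_send is not None:
        receiver_idle[to_send["receiver_id"]] = False
        buffer_mem.pop(to_send["buffer_addr"])        # free the payload slot
        state_records.remove(to_send)                 # drop its narrow record

    # 2. Store the highest-priority request offered by the sending ends this cycle
    #    (the single-port buffer can accept only one request per cycle).
    if incoming:
        req = min(incoming, key=lambda r: r["priority"])   # lower = higher priority (assumption)
        addr = max(buffer_mem, default=-1) + 1
        buffer_mem[addr] = req["payload"]
        # 3. The state updater records only the narrow bookkeeping fields.
        state_records.append({"buffer_addr": addr,
                              "priority": req["priority"],
                              "receiver_id": req["receiver_id"]})

    # 4. Among buffered requests whose receiving end is currently idle, pick one
    #    with the highest priority to be sent in cycle i+1.
    candidates = [r for r in state_records if receiver_idle[r["receiver_id"]]]
    return min(candidates, key=lambda r: r["priority"]) if candidates else None
```

The value returned here stands for the cache request information to be sent in the (i+1)th cycle; how the idle flags are refreshed when a receiving end finishes processing is left outside the sketch.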
As an embodiment, the status updater is configured to perform the following steps:
s1, determining the identification information of a sending end in an idle state at present based on the state information of the receiving end updated in the ith period;
s2, determining candidate cache request information based on the identification information of the sending end currently in an idle state;
and S3, selecting the cache request information with the highest priority of the sending end from the candidate cache request information as the cache request information to be sent in the (i + 1) th cycle.
The following describes, through several specific embodiments, the process by which the state updater determines the cache request information to be sent:
the first embodiment,
The step S2 includes:
step S211, determining all the cache request information corresponding to the sending end identification information currently in the idle state in the state updater as candidate cache request information.
The step S3 includes:
step S311, determining whether the number of the cache request information with the highest priority at the sending end in the candidate cache request information is greater than 1, if so, executing step S312, and if so, executing step S313;
step S312, randomly selecting a cache request message with the highest priority of the sending end from the candidate cache request messages as the cache request message to be sent in the (i + 1) th cycle;
step S313, directly determining the cache request information with the highest priority at the sending end as the cache request information to be sent in the (i + 1) th cycle.
With the first embodiment, one piece of cache request information with the highest sending end priority is randomly selected, from the cache request information whose receiving end is currently idle, as the cache request information to be sent in the (i+1)th cycle; a sketch is given below. It can be understood that in a practical application scenario the number of request messages is usually large, so although the selection is random, the selection probability is fairly balanced.
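The following short Python sketch follows steps S211 and S311 to S313 under the same assumed record layout as above (narrow dict records and an idle-flag dict); it is illustrative only.

```python
import random

def select_first_embodiment(state_records, receiver_idle):
    """Candidates: every buffered request whose receiving end is idle (S211)."""
    candidates = [r for r in state_records if receiver_idle[r["receiver_id"]]]
    if not candidates:
        return None
    best = min(r["priority"] for r in candidates)            # highest priority present
    top = [r for r in candidates if r["priority"] == best]   # S311: count the ties
    return top[0] if len(top) == 1 else random.choice(top)   # S313 / S312
```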
The second embodiment,
The step S2 includes:
step S221, the identification information of the current idle transmitter is used as a target receiver;
as an example, in step S221, a receiving end may be specifically selected as a target receiving end from the transmitting end identification information currently in the idle state by using a time slice polling algorithm.
Each receiver can have equal chance to be selected as a target receiver through the step S221.
Step S222, determining the cache request information corresponding to the target receiving end in the state updater as candidate cache request information.
The step S3 includes:
step S311, determining whether the number of the cache request information with the highest priority at the sending end in the candidate cache request information is greater than 1, if so, executing step S312, and if so, executing step S313;
step S312, randomly selecting a cache request message with the highest priority of the sending end from the candidate cache request messages as the cache request message to be sent in the (i + 1) th cycle;
step S313, directly determining the cache request information with the highest priority at the sending end as the cache request information to be sent in the (i + 1) th cycle.
The third embodiment,
The step S2 includes:
step S211, determining all the cache request information corresponding to the sending end identification information currently in the idle state in the state updater as candidate cache request information.
The step S3 includes:
step S321, determining whether the number of the cache request messages with the highest priority at the sending end in the candidate cache request messages is greater than 1, if so, executing step S322, and if so, executing step S323;
step S322, determining the cache request information stored in the cache memory firstly in the candidate cache request information as the cache request information to be sent in the (i + 1) th cycle;
step S323, directly determining the cache request information with the highest priority at the sending end as the cache request information to be sent in the (i + 1) th cycle.
The fourth embodiment,
The step S2 includes:
step S221, the identification information of the current idle transmitter is used as a target receiver;
as an example, in step S221, a receiver may be specifically selected as a target receiver from the sender identification information currently in the idle state by using a time slice polling algorithm.
Each receiver can have equal opportunity to be selected as a target receiver through the step S221.
Step S222, determining the cache request information corresponding to the target receiving end in the state updater as candidate cache request information.
The step S3 includes:
step S321, determining whether the number of the cache request messages with the highest priority at the sending end in the candidate cache request messages is greater than 1, if so, executing step S322, and if so, executing step S323;
step S322, determining the cache request information firstly stored in the cache memory in the candidate cache request information as the cache request information to be sent in the (i + 1) th cycle;
step S323, directly determining the cache request information with the highest priority at the sending end as the cache request information to be sent in the (i + 1) th cycle.
It can be understood that some applications require request information of the same priority to be processed first-in, first-out; with the third and fourth embodiments, the cache request information to be sent in the (i+1)th cycle can be selected according to the order in which the requests were stored in the buffer memory.
It should be noted that only four embodiments are listed above; other similar combinations, as well as other implementations and combinations that those skilled in the art can derive from the above embodiments, also fall within the protection scope of the present invention and are not listed here.
As an embodiment, the system includes one state updater, in which a state update table is stored; the state update table is used to store the cache address information of each piece of cache request information in the buffer memory, the corresponding sending end priority information and receiving end identification information, and the current state information of each receiving end.
As an embodiment, the system includes N state updaters, each corresponding to one receiving end and each storing one state update table; each state update table is used to store the current state information of its corresponding receiving end, together with the cache address information and the corresponding sending end priority information of the cache request information currently in the buffer memory that is destined for that receiving end.
It should be noted that the state update table may record a time in each record, or mark a sequence number in time order, or directly infer the order in which request information was stored in the buffer memory from the order of its records. When a piece of request information is selected as the cache request information to be sent in the (i+1)th cycle, then after it is sent in the (i+1)th cycle the corresponding request information is deleted from the buffer memory and the corresponding record is deleted from the state update table.
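For the N-updater variant, the following small sketch shows one possible shape of a per-receiving-end state update table, with a sequence number standing in for the stored-order record; the class and field names are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PerReceiverTable:
    """State update table kept by the state updater of one receiving end."""
    receiver_id: int
    idle: bool = True                                   # current state of the receiving end
    records: List[Dict] = field(default_factory=list)   # narrow records for its pending requests

    def add(self, buffer_addr: int, priority: int, seq: int) -> None:
        self.records.append({"buffer_addr": buffer_addr, "priority": priority, "seq": seq})

    def remove_sent(self, buffer_addr: int) -> None:
        # Called after the request is sent in cycle i+1; the payload is freed from
        # the buffer memory separately (not shown here).
        self.records = [r for r in self.records if r["buffer_addr"] != buffer_addr]
```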
In the system of the first embodiment, one buffer memory cooperates with at least one state updater, so that a high-priority sending end is affected only by the state of the receiving ends and is not blocked by low-priority sending ends, while the state updater stores only a small amount of information and does not occupy a large amount of physical space. Cache requests of different priorities are thereby distributed reasonably among the receiving ends, high-priority request information is not blocked during cache distribution, and cache distribution efficiency is improved.
The second embodiment,
If the receiving ends are cache receiving ends and the application scenario is multiple sending ends requesting information from multiple caches, then after cache distribution, sending ends with different response rates that are connected to the same cache terminal conflict in the cache and lower the cache hit rate; in particular, the hit rate of a high-priority sending end can be dragged down by low-priority sending ends. Setting up multiple separate groups of cache terminals would avoid this mutual influence on hit rates, but would greatly increase hardware area and resources, so it is clearly not a good solution. The second embodiment is provided on this basis.
The second embodiment provides a multi-priority-based cache query system, as shown in fig. 2, including a first buffer module and N cache receiving ends, wherein,
one end of the first buffer module is connected with the M sending ends, and the other end is connected with the N cache receiving ends {C_1, C_2, …, C_N}, where for C_i the value of i ranges from 1 to N. The first buffer module is used for caching the request information sent by the M sending ends and distributing it to the N cache receiving ends. The M sending ends correspond to P priorities {W_1, W_2, …, W_P}, whose priority levels decrease in order; P is less than or equal to N and each priority corresponds to at least one sending end. {W_1, W_2, …, W_P} is divided into Q priority groups {WR_1, WR_2, …, WR_Q}, Q ≤ P, where the lowest priority in WR_(q-1) is higher than the highest priority in WR_q, and q ranges from 1 to Q. It should be noted that the first buffer module may directly use the buffer structure formed by the buffer memory and the state updater in the first embodiment, or use another buffer structure; the buffer structure of the first embodiment is not described again here.
Each C_i corresponds to an address request range Z_i, and the Z_i of different C_i do not overlap. The cache corresponding to Z_i comprises Q independent cache regions {CX_i1, CX_i2, …, CX_iQ}, which are physically isolated from one another and whose sizes decrease in order; the address request range corresponding to each CX_iq is the whole of Z_i. When the first buffer module determines the corresponding C_i for a piece of cache request information to be sent, it determines the corresponding CX_iq based on the priority group WR_q to which the sending end priority of that cache request information belongs, and sends the cache request information to be sent to the corresponding CX_iq for processing.
It should be noted that the other end of the N cache receiving ends {C_1, C_2, …, C_N} is connected to a memory, typically a High Bandwidth Memory (HBM). Each C_i corresponds to an address request range Z_i; the maximum total range of the address requests corresponding to C_1, C_2, …, C_N is the range of the memory, and the Z_i of different C_i do not overlap.
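The two-level routing above can be illustrated with the following small Python sketch: the request address selects the cache receiving end C_i, and the priority group of the sending end selects the physically isolated region CX_iq inside it. The address ranges, the priority-to-group mapping and all names are made-up examples, not values from the patent.

```python
def route_request(addr, sender_priority, address_ranges, priority_to_group):
    """Return (i, q): index of the cache receiving end C_i and of its region CX_iq."""
    # address_ranges: list of (lo, hi) per cache receiving end; ranges do not overlap.
    for i, (lo, hi) in enumerate(address_ranges):
        if lo <= addr < hi:
            q = priority_to_group[sender_priority]   # group WR_q of this sender's priority
            return i, q
    raise ValueError("address outside the memory range covered by the cache receiving ends")

# Example with N = 2 cache receiving ends and Q = 2 groups: W_1 alone in WR_1, the rest in WR_2.
ranges = [(0x0000, 0x8000), (0x8000, 0x10000)]
groups = {1: 0, 2: 1, 3: 1, 4: 1}                    # priority value -> group index (assumption)
print(route_request(0x9ABC, 1, ranges, groups))      # -> (1, 0): the WR_1 region inside C_2
```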
As an example, if p ≤ px and, when the sending end corresponding to W_p shares a cache receiving end with sending ends of other priorities, the difference between its cache hit rate and the hit rate it would obtain with an exclusively used cache receiving end is larger than a preset hit-rate difference threshold, then W_p is placed in a priority group of its own, where px is a preset priority identification threshold and px < P. It should be noted that, in general, a high-priority sending end has a high response level, a fast response rate and a high real-time requirement, and low-priority sending ends more easily affect the cache hit rate of high priorities, so an independent cache region mainly needs to be set for high-priority sending ends. Low-priority sending ends respond more slowly and have low real-time requirements, so their cache hit rate is less easily affected by sending ends of other priorities.
As an example, with Q = 2, WR_1 includes only W_1, and the sending end corresponding to W_1 alone uses one cache region in each cache terminal. WR_2 includes {W_2, W_3, …, W_P}, and the sending ends corresponding to W_2, W_3, …, W_P share one cache region in each cache terminal. In this way a cache region is set aside exclusively for the highest priority, improving the cache hit rate of the highest-priority sending end.
As an embodiment, if P is smaller than a preset threshold, Q = P and each priority group contains exactly one priority; when the number of priority groups is small, for example only 3, an independent cache region may be set for each priority group.
As an embodiment, when request information enters the cache and the target data is not found, a request is sent to the memory and the data is obtained from the memory; multiple requests to the memory may occur in the same time period. On this basis, the system further comprises a second buffer module, one end of which is connected with the N cache receiving ends and the other end with the memory; the second buffer module is used for caching the request information sent by the N cache receiving ends and distributing it to the memory. The second buffer module comprises Q buffer FIFOs {F_1, F_2, …, F_Q}, where F_q receives and outputs the request information output by all CX_iq. In this way, the memory request information corresponding to sending and receiving ends of the same priority group is stored in one queue. F_1, F_2, …, F_Q send request information to the memory in order of priority from high to low: the second buffer module sends to the memory the request information in the highest-priority F_q among {F_1, F_2, …, F_Q} that currently has request information stored.
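A compact sketch of such a second buffer module follows: one FIFO per priority group, and each time a miss request is forwarded to the memory, the first non-empty FIFO in group-priority order wins. The class name, the deque choice and the zero-based group indexing are assumptions.

```python
from collections import deque

class SecondBufferModule:
    """Q FIFOs F_1 ... F_Q; index 0 is the highest-priority group (assumption)."""

    def __init__(self, num_groups: int):
        self.fifos = [deque() for _ in range(num_groups)]

    def push(self, group_index: int, miss_request) -> None:
        # Receives miss requests from every region CX_iq belonging to group q.
        self.fifos[group_index].append(miss_request)

    def pop_for_memory(self):
        # Strict priority: the first non-empty FIFO sends one request to the memory.
        for fifo in self.fifos:
            if fifo:
                return fifo.popleft()
        return None
```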
With the system of the second embodiment, priorities are grouped and each cache receiving end is divided into different cache regions, so that the cache regions serving requests from sending ends of different priority groups are physically separated and data in the corresponding cache regions cannot evict one another. This increases the cache hit rate, meets the space and time constraints of the caches, and improves the efficiency with which multiple sending ends send request information to multiple caches.
It should be noted that the technical details of the first embodiment and the second embodiment can be combined and are not listed here.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently, or simultaneously. In addition, the order of the steps may be rearranged. A process may be terminated when its operations are completed, but may have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc.
Although the present invention has been described with reference to the preferred embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the present invention.

Claims (9)

1. A multi-priority based cache distribution system, characterized in that,
the method comprises the following steps: a buffer memory and at least one state updater, wherein,
one end of the buffer memory is connected with M sending ends, the other end of the buffer memory is connected with N receiving ends, the buffer memory is used for caching request information sent by the M sending ends, the M sending ends correspond to P priorities, P is less than or equal to N, and each priority corresponds to at least one sending end;
the state updater is respectively connected with the buffer memory and the N receiving ends and is used for storing the buffer address information, the corresponding sending end priority information, the receiving end identification information and the current state information of each receiving end of each current buffer request information in the buffer memory, wherein the corresponding state information of the receiving ends is in a non-idle state in the process of processing the request information, and the corresponding state information of the receiving ends is in an idle state in the process of not processing the request information;
the buffer memory takes the highest priority of the current cache request information which can be sent out as the cache request information to be sent and determines the corresponding receiving end based on the information which is stored in the state updater at present, and sends the cache request information to be sent to the corresponding receiving end;
in the ith period:
the buffer memory is used for sending the cache request information to be sent determined in the (i-1) th period to a corresponding receiving end, and the corresponding receiving end is converted from an idle state to a non-idle state;
the buffer memory is also used for acquiring request information with the highest current priority from M sending ends and storing the request information into the buffer memory;
the state updater is used for acquiring cache address information, corresponding sending end priority information and receiving end identification information of the cache request information stored in the ith period from the cache memory and updating the cache address information, the corresponding sending end priority information and the receiving end identification information into the state updater;
the state updater is further configured to obtain current state information of the N receiving ends and update the current state information to the state updater;
and the state updater is further used for determining the cache request information to be sent and the corresponding receiving end in the (i+1)th period based on the information updated in the ith period, wherein the value range of i is from 1 to I, and I is the total number of periods.
2. The system of claim 1,
the state updater is configured to perform the following steps:
s1, determining the identification information of a receiving end currently in an idle state based on the state information of the receiving end updated in the ith period;
s2, determining candidate cache request information based on the identification information of the receiving end currently in an idle state;
and S3, selecting one cache request message with the highest priority of the sending end from the candidate cache request messages as the cache request message to be sent in the (i+1)th cycle, wherein the value range of i is from 1 to I, and I is the total number of cycles.
3. The system of claim 2,
the step S2 includes:
step S211, determining all the cache request information corresponding to the receiving end identification information currently in the idle state in the state updater as candidate cache request information.
4. The system of claim 2,
the step S2 includes:
step S221, selecting a receiving terminal from the receiving terminal identification information in the idle state at present as a target receiving terminal;
step S222, determining the cache request information corresponding to the target receiving end in the state updater as candidate cache request information.
5. The system of claim 4,
in step S221, a receiving end is selected as a target receiving end from the receiving end identification information currently in the idle state by using a time slice polling algorithm.
6. The system of claim 2,
the step S3 includes:
step S311, determining whether the number of the cache request messages with the highest priority at the sending end in the candidate cache request messages is greater than 1, if so, executing step S312, and if not, executing step S313;
step S312, randomly selecting a cache request message with the highest priority of the sending end from the candidate cache request messages as the cache request message to be sent in the (i + 1) th cycle;
step S313, directly determining the cache request information with the highest priority at the sending end as the cache request information to be sent in the (i + 1) th cycle.
7. The system of claim 2,
the step S3 includes:
step S321, determining whether the number of the cache request messages with the highest priority at the sending end in the candidate cache request messages is greater than 1, if so, executing step S322, and if not, executing step S323;
step S322, determining the cache request information firstly stored in the cache memory in the candidate cache request information as the cache request information to be sent in the (i + 1) th cycle;
step S323, directly determining the cache request information with the highest priority at the sending end as the cache request information to be sent in the (i + 1) th cycle.
8. The system of claim 1 or 2,
the system comprises a state updater, wherein a state updating table is stored in the state updater, and the state updating table is used for storing cache address information of each cache request message in the cache memory, corresponding sending end priority information and receiving end identification information, and current state information of each receiving end is stored in the state updating table.
9. The system of claim 1 or 2,
the system comprises N state updaters, wherein each state updater corresponds to one receiving end, a state updating table is correspondingly stored in each state updater, and each state updating table is used for storing the current state information of the corresponding receiving end, the cache address information of the cache request information corresponding to the receiving end corresponding to the state updater in the current cache memory in the cache memory and the priority information of the corresponding sending end.
CN202211154581.5A 2022-09-22 2022-09-22 Cache distribution system based on multiple priorities Active CN115277596B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211154581.5A CN115277596B (en) 2022-09-22 2022-09-22 Cache distribution system based on multiple priorities

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211154581.5A CN115277596B (en) 2022-09-22 2022-09-22 Cache distribution system based on multiple priorities

Publications (2)

Publication Number Publication Date
CN115277596A CN115277596A (en) 2022-11-01
CN115277596B true CN115277596B (en) 2023-02-07

Family

ID=83757164

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211154581.5A Active CN115277596B (en) 2022-09-22 2022-09-22 Cache distribution system based on multiple priorities

Country Status (1)

Country Link
CN (1) CN115277596B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118069570B (en) * 2024-04-19 2024-07-30 沐曦集成电路(上海)有限公司 Doorbell type chip access system, device and method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112039979A (en) * 2020-08-27 2020-12-04 中国平安财产保险股份有限公司 Distributed data cache management method, device, equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101146290A (en) * 2007-10-12 2008-03-19 中兴通讯股份有限公司 A system and method for multi-port AT command multiplexing processor
AT14694U1 (en) * 2015-08-19 2016-04-15 Knapp Ag Picking station for order picking of articles in order containers and conveyor pockets for order picking and batch picking
EP3324578B1 (en) * 2016-11-18 2020-01-01 Mercury Mission Systems International S.A. Safe network interface

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112039979A (en) * 2020-08-27 2020-12-04 中国平安财产保险股份有限公司 Distributed data cache management method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN115277596A (en) 2022-11-01

Similar Documents

Publication Publication Date Title
US7304942B1 (en) Methods and apparatus for maintaining statistic counters and updating a secondary counter storage via a queue for reducing or eliminating overflow of the counters
US10860493B2 (en) Method and apparatus for data storage system
US7599287B2 (en) Tokens in token buckets maintained among primary and secondary storages
US12099749B2 (en) Data read/write method and apparatus, and exchange chip and storage medium
US20120173774A1 (en) Storage-side storage request management
US7296112B1 (en) High bandwidth memory management using multi-bank DRAM devices
US20070002172A1 (en) Linking frame data by inserting qualifiers in control blocks
US8725873B1 (en) Multi-server round robin arbiter
US9020953B1 (en) Search table for data networking matching
US9870319B1 (en) Multibank queuing system
US20230291696A1 (en) Method and apparatus for managing buffering of data packet of network card, terminal and storage medium
EP2219114A1 (en) Method and apparatus for allocating storage addresses
CN115277596B (en) Cache distribution system based on multiple priorities
CN115037708B (en) Message processing method, system, device and computer readable storage medium
CN115242729B (en) Cache query system based on multiple priorities
US20200259766A1 (en) Packet processing
US7373467B2 (en) Storage device flow control
US7088731B2 (en) Memory management for packet switching device
CN116955247A (en) Cache descriptor management device and method, medium and chip thereof
CN112650449B (en) Method and system for releasing cache space, electronic device and storage medium
CN113438274A (en) Data transmission method and device, computer equipment and readable storage medium
US10003551B2 (en) Packet memory system, method and device for preventing underrun
CN114928577A (en) Workload proving chip and processing method thereof
US20020118693A1 (en) Storing frame modification information in a bank in memory
US10031884B2 (en) Storage apparatus and method for processing plurality of pieces of client data

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant