
CN118519729B - Message scheduling method, system, storage medium and electronic equipment - Google Patents


Info

Publication number
CN118519729B
CN118519729B
Authority
CN
China
Prior art keywords
message
queue
order
posted
preserving
Prior art date
Legal status
Active
Application number
CN202410965963.9A
Other languages
Chinese (zh)
Other versions
CN118519729A (en)
Inventor
唐端午
何贵洲
Current Assignee
Guizhou Huaxin Semiconductor Technology Co ltd
Original Assignee
Guizhou Huaxin Semiconductor Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guizhou Huaxin Semiconductor Technology Co ltd filed Critical Guizhou Huaxin Semiconductor Technology Co ltd
Priority to CN202410965963.9A
Publication of CN118519729A
Application granted
Publication of CN118519729B

Classifications

    • G06F 9/466: Multiprogramming arrangements; transaction processing
    • G06F 13/4282: Bus transfer protocol, e.g. handshake; synchronisation on a serial bus, e.g. I2C bus, SPI bus
    • H04L 49/90: Packet switching elements; buffering arrangements
    • H04L 49/901: Buffering arrangements using storage descriptor, e.g. read or write pointers
    • H04L 49/9015: Buffering arrangements for supporting a linked list
    • G06F 2213/0026: PCI express

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application provides a message scheduling method, system, storage medium and electronic device, relating to the technical field of communications. The method comprises: receiving messages to be scheduled, filling a message linked list, and determining its pointer pointing queue; sequentially storing the messages to be scheduled into corresponding cache queues according to their message types; judging whether the message type of the first message to be scheduled has enough flow control credit; if not, adjusting the pointer pointing queue, or determining in the pointer pointing queue the next message to be scheduled whose message type differs from that of the first message to be scheduled; if so, dispatching the first message to be scheduled from the corresponding cache queue and redirecting the pointer pointing queue in the message linked list according to the message order-preserving requirement. The application only needs to record the type of each message and its position in the message linked list, without recording message timestamps, so the additional resource overhead of the whole device is small and message order preservation is achieved with high efficiency and low overhead.

Description

Message scheduling method, system, storage medium and electronic equipment
Technical Field
The present application relates to the field of communications technologies, and in particular, to a message scheduling method and system, a storage medium, and an electronic device.
Background
In the PCIE protocol, the transaction layer receives data requests from the PCIE device core layer and converts them into PCIE bus transactions; the bus transactions used by the PCIE bus are defined in TLP (Transaction Layer Packet) headers. All TLPs fall into three types: Posted messages (abbreviated as P), Non-Posted messages (abbreviated as NP) and Completion messages (abbreviated as CPL). NP messages can further be divided into requests without data (Non-Posted Request, NPR) and requests with data (Non-Posted Request With Data, NPD).
In the prior art, timestamps are used to enforce message ordering, but when the message queue is deep, a large amount of resources is consumed to additionally store the timestamp information of each message. Moreover, order preservation requires comparing the timestamps of different message types, which relies on very complex combinational logic, consumes further resources, and becomes a bottleneck for improving message transmission performance.
Therefore, how to improve the message transmission performance is a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The application aims to provide a message scheduling method, a system, a computer-readable storage medium and an electronic device which use a message linked list to remove the dependence on message transmission timestamps and thereby improve communication performance.
In order to solve the technical problems, the application provides a message scheduling method, which comprises the following specific technical scheme:
Receiving messages to be scheduled, sequentially filling the message information of the messages to be scheduled into a message linked list according to the receiving order, and determining the pointer pointing queue of the message linked list; the pointer pointing queue indicates, for each message to be scheduled, the next message in the scheduling order;
Sequentially storing the messages to be scheduled into corresponding cache queues according to the message types; the message types comprise a flexible order-preserving Posted message, a strict order-preserving Posted message, a flexible order-preserving Non-Posted With Data message, a strict order-preserving Non-Posted message and a completion message;
Judging whether the message type of the first message to be scheduled in the message linked list has enough flow control credit or not;
if not, adjusting the pointer pointing queue, or determining in the pointer pointing queue the next message to be scheduled whose message type differs from that of the first message to be scheduled, and returning to the step of judging whether the message type of the first message to be scheduled in the message linked list has enough flow control credit;
If yes, dispatching the first message to be scheduled in the message linked list from the corresponding cache queue, and redirecting the pointer pointing queue according to the message order-preserving requirement.
Optionally, dispatching the first message to be scheduled in the message linked list from the corresponding cache queue, and redirecting the pointer pointing queue according to the message order-preserving requirement, further includes:
If the first message to be scheduled is a flexible order-preserving Non-Posted message, when the message order-preserving requirement includes that a flexible order-preserving Posted message must be able to exceed a preceding flexible order-preserving Non-Posted message, setting the NPD_RO cache queue corresponding to the flexible order-preserving Non-Posted messages and the P_RO cache queue corresponding to the flexible order-preserving Posted messages to be scheduled by a first time slice round-robin scheduler; and when either of the NPD_RO cache queue and the P_RO cache queue is in a blocking state, the messages to be scheduled in the other cache queue are dequeued first until the blocking state ends, after which time slice round-robin scheduling is resumed.
Optionally, dispatching the first message to be scheduled in the message linked list from the corresponding cache queue, and redirecting the pointer pointing queue according to the message order-preserving requirement, further includes:
When the message order-preserving requirement includes that a completion message must be able to exceed a preceding strict order-preserving Non-Posted message, setting the CPL cache queue corresponding to the completion messages and the NPR/NPD_SO cache queue corresponding to the strict order-preserving Non-Posted messages to be scheduled by a second time slice round-robin scheduler; and when either of the CPL cache queue and the NPR/NPD_SO cache queue is in a blocking state, the messages to be scheduled in the other cache queue are dequeued first until the blocking state ends, after which time slice round-robin scheduling is resumed.
Optionally, dispatching the first message to be scheduled in the message linked list from the corresponding cache queue, and redirecting the pointer pointing queue according to the message order-preserving requirement, further includes:
When the message order-preserving requirement includes that a flexible order-preserving Posted message must be able to exceed a preceding flexible order-preserving Non-Posted message, strict order-preserving Non-Posted message and completion message, while flexible order-preserving Non-Posted Request messages, strict order-preserving Non-Posted With Data messages and completion messages are forbidden to exceed preceding flexible order-preserving Posted messages and strict order-preserving Posted messages, a first message selector is used to control the output of the second time slice round-robin scheduler and of the P_SO cache queue corresponding to the strict order-preserving Posted messages, and the output priority of the P_SO cache queue is higher than that of the second time slice round-robin scheduler.
Optionally, when the first message selector is used to control the output of the second time slice round-robin scheduler and of the P_SO cache queue corresponding to the strict order-preserving Posted messages, the method further includes:
if the strict order-preserving Posted message does not have enough flow control credit, outputting the message to be scheduled corresponding to the second time slice round-robin scheduler;
or, when the first timer of the first message selector expires and it is confirmed, according to the pointer pointing queue of the message linked list, that no strict order-preserving Posted message precedes the message to be scheduled output by the second time slice round-robin scheduler, outputting the message to be scheduled corresponding to the second time slice round-robin scheduler.
Optionally, dispatching the first message to be scheduled in the message linked list from the corresponding cache queue, and redirecting the pointer pointing queue according to the message order-preserving requirement, further includes:
When the message order-preserving requirement includes that a flexible order-preserving Posted message and a flexible order-preserving Non-Posted With Data message may exceed a preceding strict order-preserving Posted message, strict order-preserving Non-Posted message and completion message, that a strict order-preserving Posted message is forbidden to exceed a preceding strict order-preserving Posted message or flexible order-preserving Posted message, and that a strict order-preserving Posted message is permitted to exceed a preceding completion message and strict order-preserving Non-Posted message, a second message selector is used to control the output of the first time slice round-robin scheduler and of the first message selector, and the output priority of the first time slice round-robin scheduler is higher than that of the first message selector; when the second timer of the second message selector expires, output of the message to be scheduled corresponding to the first message selector is allowed.
Optionally, before receiving the message to be scheduled and sequentially filling the message information of the message to be scheduled into the message linked list according to the receiving sequence, the method further includes:
creating a message linked list containing a message type and a next message pointer;
Forming the pointer pointing queue according to each next message pointer;
and determining the pointer width of the pointer pointing to the queue according to the depth of the message linked list.
The application also provides a message scheduling system, which comprises:
The message receiving module is used for receiving messages to be scheduled, sequentially filling the message information of the messages to be scheduled into a message linked list according to the receiving order, and determining the pointer pointing queue of the message linked list; the pointer pointing queue indicates, for each message to be scheduled, the next message in the scheduling order;
The message buffer module is used for sequentially storing the messages to be scheduled into corresponding buffer queues according to the message types; the message types comprise a flexible order-preserving Posted message, a strict order-preserving Posted message, a flexible order-preserving Non-Posted With Data message, a strict order-preserving Non-Posted message and a completion message;
a message flow control judging module, configured to judge whether the message type of the first message to be scheduled in the message linked list has enough flow control credit;
The scheduling module is configured to, when the judging result of the message flow control judging module is negative, adjust the pointer pointing queue or determine in the pointer pointing queue the next message to be scheduled whose type differs from that of the first message to be scheduled, and re-enter the message flow control judging module; and, when the judging result of the message flow control judging module is positive, dispatch the first message to be scheduled in the message linked list from the corresponding cache queue and redirect the pointer pointing queue according to the message order-preserving requirement.
The application also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the steps of the method as described above.
The application also provides an electronic device comprising a memory in which a computer program is stored and a processor which when calling the computer program in the memory implements the steps of the method as described above.
The application provides a message scheduling method, which comprises the following steps: receiving messages to be scheduled, sequentially filling the message information of the messages to be scheduled into a message linked list according to the receiving order, and determining the pointer pointing queue of the message linked list, where the pointer pointing queue indicates, for each message to be scheduled, the next message in the scheduling order; sequentially storing the messages to be scheduled into corresponding cache queues according to their message types, the message types comprising flexible order-preserving Posted messages, strict order-preserving Posted messages, flexible order-preserving Non-Posted With Data messages, strict order-preserving Non-Posted messages and completion messages; judging whether the message type of the first message to be scheduled in the message linked list has enough flow control credit; if not, adjusting the pointer pointing queue or determining in the pointer pointing queue the next message to be scheduled whose message type differs from that of the first message to be scheduled, and returning to the step of judging whether the message type of the first message to be scheduled in the message linked list has enough flow control credit; if yes, dispatching the first message to be scheduled in the message linked list from the corresponding cache queue and redirecting the pointer pointing queue according to the message order-preserving requirement.
According to the message scheduling method provided by the application, as each message enters the cache its reception is recorded in the message linked list. For messages with specific transaction ordering requirements, whether other messages may be exceeded is judged according to the order of the messages in the message linked list. Because only the type of each message and its position in the message linked list need to be recorded, and no message timestamps are required, the additional overhead of the whole device is small. Meanwhile, because dedicated cache queues are provided for Posted messages and NPD messages whose Relaxed Order field is 1, flexible ordering of Relaxed Order messages in the PCIE protocol is supported, achieving efficient, low-overhead message order preservation and high-performance message scheduling.
The application also provides a message scheduling system, a computer-readable storage medium and an electronic device, which have the above beneficial effects and are not described again here.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present application, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a message scheduling method according to an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating an adjustment process of a pointer pointing queue according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a message sequence preserving process according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a message scheduling system according to an embodiment of the present application;
fig. 5 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
When various messages are transmitted in a PCIE (Peripheral Component Interconnect Express, a high-speed serial computer expansion bus standard) system, in order to satisfy the producer/consumer model, preserve performance as much as possible and avoid deadlock, the PCIE protocol defines a set of order-preserving rules that the transaction layer must comply with, as shown in Table 1:
TABLE 1 PCIE transaction order requirement
In Table 1, "Yes" means that a transaction of a row must be able to exceed a transaction of a column arranged in front of it; "No" means that a transaction of a row must not exceed a transaction of a column preceding it; "No requirement" means that there is no particular requirement for the order between the transactions of a row and the transactions of a column.
The entry in row A, column 2, condition a) means that a Posted message cannot exceed a Posted message ahead of it unless condition b) is triggered; condition b) means that when the Relaxed Order field of the P message is 1, there is no ordering requirement between it and the messages ahead of it (exceeding is allowed but not required, though allowing it may help performance);
The entry in row A, column 5, condition a) means there is no specific ordering requirement between a Posted message and a completion message ahead of it unless condition b) occurs; condition b) means that when there is a PCIE-to-PCI/PCI-X bridge in the system, Posted messages must be allowed to exceed completion messages ahead of them;
The entry in row C, column 2, condition a) means that an NPD message is not allowed to exceed Posted messages ahead of it unless condition b) occurs; condition b) means that an NPD message whose Relaxed Order field is 1 is allowed to exceed Posted messages ahead of it;
The entry in row D, column 2, condition a) means that a completion message is not allowed to exceed Posted messages ahead of it unless condition b) occurs; condition b) means that a completion message whose Relaxed Order field is 1 is allowed to exceed Posted messages ahead of it (which helps improve system performance).
Referring to fig. 1, fig. 1 is a flowchart of a message scheduling method according to an embodiment of the present application, where the method includes:
S101: receiving a message to be scheduled, sequentially filling message information of the message to be scheduled into a message linked list according to a receiving sequence, and determining that a pointer of the message linked list points to a queue; the pointer pointing queue is used for indicating the next message of each message to be scheduled in the scheduling sequence;
S102: sequentially storing the messages to be scheduled into corresponding cache queues according to the message types; the message types comprise a flexible order-preserving Posted message, a strict order-preserving Posted message, a flexible order-preserving Non-Posted With Data message, a strict order-preserving Non-Posted message and a completion message;
S103: judging whether the message type of the first message to be scheduled in the message linked list has enough flow control credit; if not, entering S104; if yes, entering S105;
S104: adjusting the pointer pointing queue, or determining in the pointer pointing queue the next message to be scheduled whose message type differs from that of the first message to be scheduled, and returning to the step of judging whether the message type of the first message to be scheduled in the message linked list has enough flow control credit;
s105: and dispatching the first message to be dispatched in the message linked list from the corresponding cache queue, and redirecting the pointer to the queue according to the message order-preserving requirement.
The application fills the message linked list in order as TLP messages are received. Only the message information of each message to be scheduled is filled into the message linked list; the message itself is stored in a cache queue. This embodiment assumes that the message linked list has already been obtained or created before the messages to be scheduled are received. The message linked list mainly contains the message types and the pointer pointing queue. The message type determines whether a message may be scheduled, and the pointer pointing queue records the next message of each message to be scheduled in the scheduling order, so in the initial state the scheduling order in the pointer pointing queue is simply the receiving order of the TLPs.
The embodiment is not limited to how to generate the message linked list, and the following is a feasible method for generating the message linked list:
First, creating the message linked list containing the message type and the next message pointer;
Secondly, forming the pointer pointing queue according to each next message pointer;
And thirdly, determining the pointer width of the pointer pointing to the queue according to the depth of the message linked list.
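To make the three steps above concrete, the following is a minimal Python sketch of one possible data layout (names such as LinkedListEntry, next_ptr and DEPTH are illustrative assumptions, not taken from the patent): each slot records a message type and a next-message pointer, the free slots form an available pointer queue, and the pointer width follows from the linked-list depth.

```python
import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class LinkedListEntry:
    # Message type, e.g. "P_RO", "P_SO", "NPD_RO", "NPR/NPD_SO" or "CPL".
    msg_type: Optional[str] = None
    # Index of the next message in the scheduling order ("next message pointer").
    next_ptr: Optional[int] = None

DEPTH = 16                               # assumed linked-list depth
PTR_WIDTH = math.ceil(math.log2(DEPTH))  # pointer width derived from the depth
entries = [LinkedListEntry() for _ in range(DEPTH)]
available_ptrs = list(range(DEPTH))      # free slots form the available pointer queue

print(f"depth={DEPTH}, pointer width={PTR_WIDTH} bits")
```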
In the process of generating the message linked list, the pointer pointing queue is built as a component of the message linked list. In other embodiments of the application, the pointer pointing queue may exist independently of the message linked list. Since messages are generally held in FIFO (First In First Out) queues, if only a single cache queue were used to receive TLPs, the message order-preserving requirement could not be met even when order adjustment is performed, because of the limitation of the FIFO queue. The application therefore constructs a corresponding cache queue for each message type, and each received TLP is cached in the queue of its type. The message types in this embodiment include flexible order-preserving Posted messages, strict order-preserving Posted messages, flexible order-preserving Non-Posted With Data messages, strict order-preserving Non-Posted messages and completion messages, where a completion message is also called a CPL message. Correspondingly, the P_RO queue is the queue of Posted messages whose Relaxed Order field is 1; the NPD_RO queue is the queue of Non-Posted Request With Data messages whose Relaxed Order field is 1; the P_SO queue is the queue of Posted messages whose Relaxed Order field is 0; the CPL queue is the Completion message queue; and the NPR/NPD_SO queue is the queue of Non-Posted Request (without data) messages and Non-Posted Request With Data messages whose Relaxed Order field is 0.
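As a rough illustration of how a received TLP might be steered to one of the five cache queues described above (a sketch under simplifying assumptions; classify and its tlp_kind argument stand in for the real fmt/type decode of a TLP header and are not from the patent):

```python
from collections import deque

# One FIFO per message class; the names mirror the queues in the text.
queues = {name: deque() for name in ("P_RO", "P_SO", "NPD_RO", "NPR/NPD_SO", "CPL")}

def classify(tlp_kind: str, relaxed_ordering: bool) -> str:
    """Map a TLP to its cache queue. tlp_kind is a simplified stand-in for the
    fmt/type decode of a real TLP header: 'P', 'NPR', 'NPD' or 'CPL'."""
    if tlp_kind == "P":
        return "P_RO" if relaxed_ordering else "P_SO"
    if tlp_kind == "NPD" and relaxed_ordering:
        return "NPD_RO"
    if tlp_kind in ("NPR", "NPD"):
        return "NPR/NPD_SO"
    return "CPL"

queues[classify("P", relaxed_ordering=True)].append("example Posted TLP")  # lands in P_RO
```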
On the PCIe bus, efficiency and ordering control of data transfers are two extremely important aspects. To optimize both, PCIe defines several transaction ordering rules, including Strong Ordering and Relaxed Ordering: Strong Ordering forces transactions to be processed strictly in the order they were issued, while Relaxed Ordering allows certain types of TLPs, under certain conditions, to be transported ahead of other TLPs. If the Relaxed Ordering bit in the Attr field of a message is set to 1, i.e. the Relaxed Order field is 1, the message may be reordered; this mainly concerns P_RO messages and NPD_RO messages. A Relaxed Order field of 0 indicates that the message must not overtake messages transmitted before it.
When message scheduling is performed, the message state of the peer device must be considered. To handle the congestion that may occur during data transmission, the data transmission rate of the sender is adjusted appropriately, which prevents packet loss and improves network efficiency and stability. A message having enough flow control credit means that, during transmission, it has a sufficient credit allowance to guarantee that it will be processed correctly by the receiver, avoiding data loss or retransmission caused by insufficient receive buffers. If a certain message type does not have enough flow control credit, the peer device currently cannot accept that type of message, i.e. the sender is restricted from sending it. If a certain message type has enough flow control credit, messages of that type can be sent to the peer device normally.
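The flow-control check itself can be thought of as a simple comparison against the credits advertised by the peer; the sketch below is an assumption-laden simplification (real PCIe credits are split into header and data credits per P/NP/CPL class, and the numbers are placeholders):

```python
# Credits advertised by the link partner, per flow-control class (placeholder values).
credits = {
    "P":   {"hdr": 8, "data": 64},
    "NP":  {"hdr": 4, "data": 16},
    "CPL": {"hdr": 8, "data": 64},
}

def has_enough_credit(fc_class: str, hdr_needed: int, data_needed: int) -> bool:
    """True when the receiver has advertised enough header and data credits for
    this class, so the message can be sent without risking loss or retransmission."""
    c = credits[fc_class]
    return c["hdr"] >= hdr_needed and c["data"] >= data_needed

print(has_enough_credit("NP", hdr_needed=1, data_needed=0))   # True with these values
```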
If the message type of the first message to be scheduled does not have enough flow control credit, the head position of the message linked list can be adjusted, i.e. the first message to be scheduled is no longer taken as the head of the message linked list. This process requires no data migration inside the message linked list; only the head data and the pointer pointing queue need to be changed. Clearly, if the messages immediately following the first message to be scheduled are all of the same message type, the next message to be scheduled of a different type should become the head. For example, suppose the message linked list contains the messages A, A, B, C, where the first two messages A are of the same type; if type A does not have enough flow control credit, the message A at the head position can be moved back, and the linked list becomes B, A, A, C. If type A obtains enough flow control credit after the type B message is sent, the actual sending order is B→A→A→C; if, after the first type A message is sent, type A again lacks enough flow control credit, the actual sending order is B→A→C→A. Of course, if type A lacks flow control credit for a long time, the sending order may become B→C→A→A.
Alternatively, the message linked list may be left unchanged: the next message to be scheduled in the pointer pointing queue whose message type differs from that of the first message to be scheduled is determined, and whether that message type has enough flow control credit is judged; if so, that message can be sent out.
After the pointer pointing queue is updated, the judgment of whether the message type of the first message to be scheduled has enough flow control credit is executed again, until the message type of the first message to be scheduled has enough flow control credit and the pointer pointing queue is adjusted according to the message order-preserving requirement.
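A compact Python sketch of this decision logic follows (a simplification under assumed names such as schedule_one and has_credit; the patent's design keeps the list as linked pointers rather than a Python list): dispatch the head if its type has credit, otherwise fall through to the first later message of a different type that does.

```python
def schedule_one(linked_list, has_credit):
    """One scheduling decision over a simplified list of (msg_type, payload) tuples:
    dispatch the head if its type has flow-control credit, otherwise dispatch the
    first later message of a *different* type that does. Returns the dispatched
    entry, or None if everything is blocked this cycle."""
    if not linked_list:
        return None
    head_type = linked_list[0][0]
    if has_credit(head_type):
        return linked_list.pop(0)
    for i, (msg_type, _) in enumerate(linked_list):
        if msg_type != head_type and has_credit(msg_type):
            return linked_list.pop(i)
    return None

# With the A, A, B, C example above and type A blocked, B is dispatched first:
ll = [("A", 0), ("A", 1), ("B", 2), ("C", 3)]
print(schedule_one(ll, lambda t: t != "A"))   # -> ('B', 2)
```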
The following describes the process of adjusting the pointer pointing queue; see fig. 2, which is a schematic diagram of the adjustment process of the pointer pointing queue according to an embodiment of the application. In the following embodiments, in addition to the pointer pointing queue, a corresponding message pointer queue is constructed for each type of message to indicate the positions of that type of message in the whole message linked list; the order of the position entries in a message pointer queue reflects the scheduling order of those messages.
When the pointer pointing queue is adjusted, a message pointer queue is configured for each type of received message to be scheduled; the message pointer queue records the positions of the messages in the message linked list, and the order of its elements indicates the scheduling order of those messages;
if a message is scheduled out, or a message is newly added, according to the message order-preserving requirement, the message pointer queue of the corresponding message type is updated and the pointer pointing queue is updated. The pointer pointing queue contains a head pointer, message pointers and a tail pointer: the head pointer indicates the first message to be scheduled, the tail pointer indicates the last message to be scheduled, and each message pointer indicates the next message of a message to be scheduled other than the last one.
The order-preserving linked list scheme is illustrated with Posted messages (hereinafter abbreviated as P or P messages) and NP messages (here, NP requests without data). Assume the depth of the message linked list is 5 and 5 TLPs are received in the order NP, P, P, NP, NP; see the element states and the pointer pointing queue in the "initial state" row of fig. 2. The head pointer of the linked list head=0 and the tail pointer tail=4. The NP pointer queue is 0, 3, 4, recording the positions of the NP messages in the linked list; the P pointer queue is 1, 2, recording the positions of the P messages. The available pointer queue is empty because the queue is already full. The pointer pointing queue is indicated by the directional arrows under the elements.
If scheduling of P messages is allowed, the P message closest to the front of the linked list is scheduled out; the P pointer queue becomes 2 and the available pointer queue becomes 1. The NP pointer queue remains unchanged, and the next pointer of the NP message at the head of the linked list is updated from 1 to 2. The pointer pointing queue is again indicated by the directional arrows under the elements.
If a new NP message is then received, since the current available pointer queue is 1, the new NP message is stored in the buffer at pointer 1; the NP pointer queue is updated to 0, 3, 4, 1, the available pointer queue becomes empty, and the P pointer queue is unchanged. The new NP message is placed at the tail of the linked list, so the next pointer of the element at the old tail position is updated to 1 and the tail pointer is also updated to 1. The pointer pointing queue is indicated by the directional arrows under the elements.
When the scheduler allows scheduling of NP messages, since an NP message is at the head of the linked list it can be scheduled out; the NP pointer queue is updated to 3, 4, 1, the available pointer queue becomes 0, and the P pointer queue is unchanged. Since the NP at the head of the linked list has been scheduled out, the head is updated to 2 while the tail pointer remains 1. The pointer pointing queue is indicated by the directional arrows under the elements.
If the scheduler still wants to schedule an NP message (for example, because P messages cannot be sent for lack of flow control credit), it cannot do so: the head of the linked list is now a P message that has not yet been dequeued, and the protocol does not allow NP messages to exceed a preceding P message. The pointer pointing queue is indicated by the directional arrows under the elements.
Therefore, the implementation further provides a dedicated pointer queue for each type of message, so that when linked-list elements need to be added or deleted, their positions in the linked list can be found quickly, realizing message order preservation on top of a linked-list data structure.
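The worked example above can be replayed in a few lines of Python (a sketch only: slot indices, queue names and the dispatch helper are illustrative, and the actual design manipulates hardware pointer queues rather than Python dictionaries):

```python
from collections import deque

# Depth-5 list, arrival order NP, P, P, NP, NP. Each slot records [msg_type, next_ptr];
# per-type pointer queues record where each type sits in the list.
slots = {0: ["NP", 1], 1: ["P", 2], 2: ["P", 3], 3: ["NP", 4], 4: ["NP", None]}
head, tail = 0, 4
np_ptrs, p_ptrs, free_ptrs = deque([0, 3, 4]), deque([1, 2]), deque()

def dispatch(ptr, type_queue):
    """Remove slot `ptr` from the list and re-link its predecessor (if any)."""
    global head
    type_queue.remove(ptr)
    nxt = slots[ptr][1]
    if ptr == head:
        head = nxt
    else:
        prev = next(p for p, (_, n) in slots.items() if n == ptr)
        slots[prev][1] = nxt
    free_ptrs.append(ptr)

# 1) P scheduling allowed: the front-most P (slot 1) leaves; slot 0 now points to 2.
dispatch(1, p_ptrs)
# 2) A new NP arrives: it takes the free slot (1) and becomes the new tail.
new = free_ptrs.popleft()
slots[new] = ["NP", None]
slots[tail][1] = new
tail = new
np_ptrs.append(new)
print(head, tail, list(np_ptrs), list(p_ptrs))   # 0 1 [0, 3, 4, 1] [2]
```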
According to the message scheduling method provided by the embodiment of the application, as each message enters the cache its reception is recorded in the message linked list. For messages with specific transaction ordering requirements, whether other messages may be exceeded is judged according to the order of the messages in the message linked list. Because only the type of each message and its position in the message linked list need to be recorded, and no message timestamps are required, the overhead of the whole device is small. Meanwhile, because dedicated cache queues are provided for Posted messages and NPD messages whose Relaxed Order field is 1, flexible ordering of Relaxed Order messages in the PCIE protocol is supported, achieving efficient, low-overhead message order preservation.
The following describes the order of the messages. Referring to fig. 3, fig. 3 is a schematic diagram of a message order preserving process according to an embodiment of the present application, where the message order preserving process may refer to table 1 above, and the specific process may be performed as described below:
After the TLPs are input, each TLP is stored in the cache queue corresponding to its message type, i.e. the P_RO cache queue, NPD_RO cache queue, P_SO cache queue, CPL queue and NPR/NPD_SO cache queue shown in fig. 3. At the same time, the message information is written into the message linked list control module in the lower part of fig. 3, which performs the message order-preserving check. The arrow in fig. 3 that loops back to the linked list control after TLP input indicates that the order-preserving control flow is executed again for the next TLP.
If the first message to be scheduled is a flexible order-preserving Non-Posted message, when the message order-preserving requirement includes that a flexible order-preserving Posted message must be able to exceed a preceding flexible order-preserving Non-Posted message, the NPD_RO cache queue corresponding to the flexible order-preserving Non-Posted messages and the P_RO cache queue corresponding to the flexible order-preserving Posted messages are scheduled by a first time slice round-robin scheduler; when either of the NPD_RO cache queue and the P_RO cache queue is in a blocking state, the messages to be scheduled in the other cache queue are dequeued first until the blocking state ends, after which time slice round-robin scheduling is resumed.
It should be noted that the requirement that a flexible order-preserving Posted message must be able to exceed a preceding flexible order-preserving Non-Posted message means that exceeding must be possible, not that it must happen. When either of the two cache queues, the NPD_RO cache queue and the P_RO cache queue, is blocked, the messages to be scheduled in the other cache queue are sent out first until the blocking ends, after which the first time slice round-robin scheduler continues to perform the scheduling.
According to the PCIE message order-preserving requirement, a P_RO message in the P_RO cache queue (i.e. the flexible order-preserving Posted message above) must be able to exceed an NPD_RO message ahead of it in the NPD_RO cache queue (i.e. the flexible order-preserving Non-Posted message above), while an NPD_RO message may or may not exceed a P_RO message. The P_RO queue and the NPD_RO queue therefore use RR scheduling (the Round-Robin scheduling algorithm) and, when flow control credit is sufficient, are dequeued in turn. The first time slice round-robin scheduler applies RR scheduling, which allocates a time unit called a "time slice" to each party being scheduled; a party that does not finish within its time slice is placed back at the end of the ready queue to wait for the next dispatch. This mechanism ensures that all parties make progress and avoids starvation.
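A minimal sketch of such a two-queue time-slice round-robin scheduler is shown below (the RoundRobin2 class and its queue names are assumptions; a hardware implementation would rotate on arbitration cycles rather than method calls). The same shape applies to the CPL and NPR/NPD_SO pair handled by the second time slice round-robin scheduler described next.

```python
from collections import deque

class RoundRobin2:
    """Two-queue time-slice round robin: when one queue is blocked for lack of
    flow-control credit, the other dequeues first; alternation resumes once the
    blockage clears."""
    def __init__(self, name_a, name_b):
        self.queues = {name_a: deque(), name_b: deque()}
        self.order = [name_a, name_b]

    def dequeue(self, blocked=()):
        for _ in range(2):
            name = self.order[0]
            self.order.append(self.order.pop(0))      # rotate the turn
            if self.queues[name] and name not in blocked:
                return name, self.queues[name].popleft()
        return None

rr1 = RoundRobin2("P_RO", "NPD_RO")
rr1.queues["P_RO"].extend(["P1", "P2"])
rr1.queues["NPD_RO"].append("N1")
print(rr1.dequeue(blocked={"NPD_RO"}))   # NPD_RO has no credit -> ('P_RO', 'P1')
```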
When the message order-preserving requirement includes that a completion message must be able to exceed a preceding strict order-preserving Non-Posted message, the CPL cache queue corresponding to the completion messages and the NPR/NPD_SO cache queue corresponding to the strict order-preserving Non-Posted messages are scheduled by a second time slice round-robin scheduler; when either of the CPL cache queue and the NPR/NPD_SO cache queue is in a blocking state, the messages to be scheduled in the other cache queue are dequeued first until the blocking state ends, after which time slice round-robin scheduling is resumed. The second time slice round-robin scheduler plays the same role as the first time slice round-robin scheduler; the terms "first" and "second" are used only to distinguish the application scenarios.
According to the PCIE message order-preserving requirement, a completion message must be able to exceed a preceding NP message, while there is no ordering requirement between an NP message and a preceding completion message, so the CPL queue and the NPR/NPD_SO queue can likewise use the second time slice round-robin scheduler to perform RR scheduling and, when flow control credit is satisfied, be dequeued in turn; if one queue is blocked for lack of flow control credit and cannot dequeue, the other queue is dequeued first.
When the message order-preserving requirement includes that a flexible order-preserving Posted message must be able to exceed a preceding flexible order-preserving Non-Posted message, strict order-preserving Non-Posted message and completion message, while flexible order-preserving Non-Posted Request messages, strict order-preserving Non-Posted With Data messages and completion messages are forbidden to exceed preceding flexible order-preserving Posted messages and strict order-preserving Posted messages, a first message selector is used to control the output of the second time slice round-robin scheduler and of the P_SO cache queue corresponding to the strict order-preserving Posted messages, and the output priority of the P_SO cache queue is higher than that of the second time slice round-robin scheduler.
According to the PCIE message order-preserving requirement, Posted messages must be able to exceed preceding NP and completion messages, while Non-Posted and completion messages are not allowed to exceed preceding Posted messages, so the output of the second time slice round-robin scheduler and the output of the P_SO queue pass through a message selector. Normally, the message output of the P_SO queue is selected.
On this basis, to prevent the Non-Posted message queue and the CPL queue from suffering severe starvation or even being starved indefinitely, this embodiment proposes that the Non-Posted message or completion message output may be selected in the following two cases:
In the first case, if the strict order-preserving Posted message does not have enough flow control credit, the message to be scheduled corresponding to the second time slice round-robin scheduler is output;
in the second case, when the first timer of the first message selector expires and it is confirmed, according to the pointer pointing queue of the message linked list, that no strict order-preserving Posted message precedes the message to be scheduled output by the second time slice round-robin scheduler, the message to be scheduled corresponding to the second time slice round-robin scheduler is output.
When the Posted message does not have enough flow control credit, the message to be scheduled corresponding to the second time slice round-robin scheduler can be output directly.
At the same time, a first timer can be set for the first message selector; when the first timer expires, the selector is allowed to choose the message output of the second time slice round-robin scheduler. It is easy to see that the expiration time of the first timer can be configured by software, and the first timer is reset after it expires.
However, since Non-Posted and completion messages are not allowed to exceed preceding Posted messages, when the second time slice round-robin scheduler is selected it must be checked whether a Posted message arranged ahead of its candidate has not yet been sent; if so, the second time slice round-robin scheduler cannot output the message, otherwise it can. Checking whether unsent Posted messages exist ahead of a Non-Posted or completion message can be implemented with the message linked list.
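Combining the two starvation-avoidance cases with the linked-list check described above, the selection logic of the first message selector might look roughly like the following sketch (all parameter names are assumptions; posted_pending_before stands for the linked-list lookup):

```python
def first_selector(p_so_head, rr2_candidate, p_so_has_credit, timer_expired,
                   posted_pending_before):
    """Sketch of the first message selector. The P_SO queue normally wins; the
    RR side (NP/CPL) is chosen only when P_SO has no flow-control credit, is
    empty, or the selector's timer has expired, and in every case only if the
    message linked list shows no earlier, still-unsent strictly ordered Posted
    message ahead of the RR candidate."""
    prefer_rr = (not p_so_has_credit) or timer_expired or p_so_head is None
    if prefer_rr and rr2_candidate is not None \
            and not posted_pending_before(rr2_candidate):
        return ("RR2", rr2_candidate)
    if p_so_head is not None and p_so_has_credit:
        return ("P_SO", p_so_head)
    return None   # both sides blocked this cycle
```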
The message linked list is ordered by arrival: Head is the earliest entry and Tail is the latest. Each element of the message linked list consists of a message type and a pointer NEXT to the next message (the pointer width is chosen according to the queue depth). In addition, each message type has its own pointer queue recording the position of each of its messages in the linked list. Each time a message is sent out, its corresponding element is deleted from the linked list (the position of the deleted element is obtained from the pointer queue of the corresponding type); when a new message arrives, a new element is added to the linked list (its pointer is taken from the available pointer queue) and the pointer pointing queue is updated.
When the message order-preserving requirement includes that a flexible order-preserving Posted message and a flexible order-preserving Non-Posted With Data message may exceed a preceding strict order-preserving Posted message, strict order-preserving Non-Posted message and completion message, that a strict order-preserving Posted message is forbidden to exceed a preceding strict order-preserving Posted message or flexible order-preserving Posted message, and that a strict order-preserving Posted message is permitted to exceed a preceding completion message and strict order-preserving Non-Posted message, a second message selector is used to control the output of the first time slice round-robin scheduler and of the first message selector, and the output priority of the first time slice round-robin scheduler is higher than that of the first message selector; when the second timer of the second message selector expires, output of the message to be scheduled corresponding to the first message selector is allowed.
Based on this, the following two pieces of information can be produced from the additions and deletions of elements in the current message linked list:
whether there are Posted messages ahead of the NP/CPL message that have not yet been sent out;
whether there are Posted messages ahead of the P_SO message that have not yet been sent out.
According to the PCIE protocol, P_RO and NPD_RO messages may exceed P_SO/CPL/NP messages ahead of them, whereas a P_SO message is not allowed to exceed a Posted message ahead of it (whether P_SO or P_RO) but may exceed NP and completion messages ahead of it. The second message selector therefore normally selects the message output of the first time slice round-robin scheduler; to avoid starvation of the channel where the first message selector is located, the output of the first message selector may be selected in the following two cases:
the flow control credit of the message output by the first time slice round-robin scheduler is insufficient; or the second timer expires, allowing the output of the first message selector to be selected at intervals.
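A corresponding sketch of the second message selector's priority rule (parameter and function names are assumptions; the per-message ordering checks are listed separately below):

```python
def second_selector(rr1_candidate, rr1_has_credit, timer2_expired, sel1_candidate):
    """The first time slice round-robin scheduler (P_RO/NPD_RO side) normally wins;
    the first message selector's output is taken only when the RR1 side lacks
    flow-control credit, is empty, or the second timer has expired."""
    if rr1_candidate is not None and rr1_has_credit and not timer2_expired:
        return ("RR1", rr1_candidate)
    if sel1_candidate is not None:
        return ("SEL1", sel1_candidate)
    if rr1_candidate is not None and rr1_has_credit:
        return ("RR1", rr1_candidate)   # timer expired but the other side had nothing
    return None
```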
It can be seen that, when the output of the second message selector is being decided, whether the current candidate message can be output can also be judged according to the state of the message order-preserving control module, as sketched in the code after this list:
First case: if the candidate of the second message selector is an NP message, judge whether any P message arranged ahead of it has not been output; if so, the second message selector cannot output the message, otherwise it can;
Second case: if the candidate of the second message selector is a completion message, judge whether its Relaxed Order field is 1; if it is 1, the message can be output directly; if it is 0, it must still be judged whether any P message arranged ahead of it has not been output, and if so the second message selector cannot output the message, otherwise it can;
Third case: if the candidate of the second message selector is a P_SO message, judge whether any P_RO message is arranged ahead of it; if so, the second message selector cannot output the message, otherwise it can.
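The three checks can be summarized in the following sketch (predicate names are assumptions; p_pending_before and p_ro_pending_before stand for lookups in the message linked list):

```python
def ordering_check_allows(candidate_type: str, relaxed_order: bool,
                          p_pending_before, p_ro_pending_before) -> bool:
    """Return True if the candidate chosen above may be output now."""
    if candidate_type == "NP":        # case 1: NP must not pass an earlier P message
        return not p_pending_before()
    if candidate_type == "CPL":       # case 2: RO completions may pass earlier P messages
        return relaxed_order or not p_pending_before()
    if candidate_type == "P_SO":      # case 3: P_SO must not pass an earlier P_RO message
        return not p_ro_pending_before()
    return True                       # P_RO / NPD_RO candidates need no extra check here

# Example: a strictly ordered completion with an unsent P ahead of it must wait.
print(ordering_check_allows("CPL", relaxed_order=False,
                            p_pending_before=lambda: True,
                            p_ro_pending_before=lambda: False))   # False
```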
In this embodiment, messages without ordering requirements are scheduled with the RR scheduling algorithm, while for messages with specific transaction ordering requirements, whether one message may exceed another is determined according to their order in the linked list. Only the message types and the positions of the messages in the linked list need to be recorded, so the overhead of the whole device is small; meanwhile, because dedicated cache queues are provided for Posted messages and Non-Posted With Data messages whose Relaxed Order field is 1, flexible ordering of messages in the PCIE protocol is supported.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a message scheduling system provided by an embodiment of the present application, and the present application further provides a message scheduling system, including:
The message receiving module is used for receiving messages to be scheduled, sequentially filling the message information of the messages to be scheduled into a message linked list according to the receiving order, and determining the pointer pointing queue of the message linked list; the pointer pointing queue indicates, for each message to be scheduled, the next message in the scheduling order;
The message buffer module is used for sequentially storing the messages to be scheduled into corresponding buffer queues according to the message types; the message types comprise a flexible order-preserving Posted message, a strict order-preserving Posted message, a flexible order-preserving Non-Posted With Data message, a strict order-preserving Non-Posted message and a completion message;
a message flow control judging module, configured to judge whether the message type of the first message to be scheduled in the message linked list has enough flow control credit;
The scheduling module is configured to, when the judging result of the message flow control judging module is negative, adjust the pointer pointing queue or determine in the pointer pointing queue the next message to be scheduled whose type differs from that of the first message to be scheduled, and re-enter the message flow control judging module; and, when the judging result of the message flow control judging module is positive, dispatch the first message to be scheduled in the message linked list from the corresponding cache queue and redirect the pointer pointing queue according to the message order-preserving requirement.
Based on the above embodiment, as a preferred embodiment, further comprising:
The first message order-preserving module is used for, if the first message to be scheduled is a flexible order-preserving Non-Posted message, when the message order-preserving requirement includes that a flexible order-preserving Posted message must be able to exceed a preceding flexible order-preserving Non-Posted message, setting the NPD_RO cache queue corresponding to the flexible order-preserving Non-Posted messages and the P_RO cache queue corresponding to the flexible order-preserving Posted messages to be scheduled by a first time slice round-robin scheduler; and when either of the NPD_RO cache queue and the P_RO cache queue is in a blocking state, the messages to be scheduled in the other cache queue are dequeued first until the blocking state ends, after which time slice round-robin scheduling is resumed.
Based on the above embodiment, as a preferred embodiment, further comprising:
The second message order-preserving module is used for, when the message order-preserving requirement includes that a completion message must be able to exceed a preceding strict order-preserving Non-Posted message, setting the CPL cache queue corresponding to the completion messages and the NPR/NPD_SO cache queue corresponding to the strict order-preserving Non-Posted messages to be scheduled by a second time slice round-robin scheduler; and when either of the CPL cache queue and the NPR/NPD_SO cache queue is in a blocking state, the messages to be scheduled in the other cache queue are dequeued first until the blocking state ends, after which time slice round-robin scheduling is resumed.
Based on the above embodiment, as a preferred embodiment, further comprising:
The third message order-preserving module is used for, when the message order-preserving requirement includes that a flexible order-preserving Posted message must be able to exceed a preceding flexible order-preserving Non-Posted message, strict order-preserving Non-Posted message and completion message, while flexible order-preserving Non-Posted Request messages, strict order-preserving Non-Posted With Data messages and completion messages are forbidden to exceed preceding flexible order-preserving Posted messages and strict order-preserving Posted messages, using a first message selector to control the output of the second time slice round-robin scheduler and of the P_SO cache queue corresponding to the strict order-preserving Posted messages, the output priority of the P_SO cache queue being higher than that of the second time slice round-robin scheduler.
Based on the foregoing embodiment, as a preferred embodiment, the third packet order keeping module further includes:
A message starvation control unit, configured to output the message to be scheduled corresponding to the second time slice round-robin scheduler if the strict order-preserving Posted message does not have enough flow control credit; or, when the first timer of the first message selector expires and it is confirmed, according to the pointer pointing queue of the message linked list, that no strict order-preserving Posted message precedes the message to be scheduled output by the second time slice round-robin scheduler, to output the message to be scheduled corresponding to the second time slice round-robin scheduler.
Based on the above embodiment, as a preferred embodiment, further comprising:
A fourth message order-preserving module, configured to, when the message order-preserving requirement includes that a flexible order-preserving Posted message and a flexible order-preserving Non-Posted With Data message may exceed a preceding strict order-preserving Posted message, strict order-preserving Non-Posted message and completion message, that a strict order-preserving Posted message is forbidden to exceed a preceding strict order-preserving Posted message or flexible order-preserving Posted message, and that a strict order-preserving Posted message is permitted to exceed a preceding completion message and strict order-preserving Non-Posted message, control the output of the first time slice round-robin scheduler and of the first message selector with a second message selector, the output priority of the first time slice round-robin scheduler being higher than that of the first message selector; and to allow output of the message to be scheduled corresponding to the first message selector when the second timer of the second message selector expires.
Based on the above embodiment, as a preferred embodiment, the system further includes:
The pointer pointing queue generating module is used for creating a message linked list containing a message type and a next message pointer; forming the pointer pointing queue from each next message pointer; and determining the pointer width of the pointer pointing queue according to the depth of the message linked list.
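A possible software model of the message linked list and its pointer pointing queue (the field names, the five-way type enum and the width helper are illustrative assumptions; in hardware the pointer width is simply the number of bits needed to index an entry, i.e. ceil(log2(depth))):

```cpp
#include <cstdint>
#include <vector>

enum class MsgType : uint8_t { P_RO, NPD_RO, P_SO, CPL, NPR_NPD_SO };

struct ListEntry {
    MsgType  type;   // message type of this entry
    uint32_t next;   // pointer (index) of the next message in scheduling order
};

// Pointer width in bits needed to index a linked list of the given depth.
unsigned pointer_width(unsigned depth) {
    unsigned bits = 0;
    while ((1u << bits) < depth) ++bits;   // ceil(log2(depth)) for depth >= 1
    return bits;
}

// The pointer pointing queue is the sequence of "next" pointers of all entries.
std::vector<uint32_t> build_pointer_queue(const std::vector<ListEntry>& list) {
    std::vector<uint32_t> q;
    q.reserve(list.size());
    for (const ListEntry& e : list) q.push_back(e.next);
    return q;
}
```

For instance, a linked-list depth of 64 would give 6-bit next-message pointers.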
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed, performs the steps provided by the above embodiments. The storage medium may include a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The present application also provides an electronic device. Referring to fig. 5, which shows a block diagram of an electronic device provided in an embodiment of the present application, the electronic device may include a processor 1410 and a memory 1420.
Processor 1410 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1410 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). Processor 1410 may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), is the processor for processing data in the awake state; the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1410 may integrate a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1410 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1420 may include one or more computer-readable storage media, which may be non-transitory. Memory 1420 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In this embodiment, the memory 1420 is used at least to store a computer program 1421 which, when loaded and executed by the processor 1410, implements the relevant steps of the method performed on the electronic device side as disclosed in any of the foregoing embodiments. In addition, the resources stored by memory 1420 may include an operating system 1422, data 1423, and the like, and the storage may be transient or persistent. The operating system 1422 may include Windows, Linux, Android, and the like.
In some embodiments, the electronic device may further include a display 1430, an input-output interface 1440, a communication interface 1450, a sensor 1460, a power supply 1470, and a communication bus 1480.
Of course, the structure shown in fig. 5 does not constitute a limitation on the electronic device of the embodiments of the present application; in practical applications, the electronic device may include more or fewer components than shown in fig. 5, or combine certain components.
The embodiments in this description are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to one another. Since the system provided by the embodiments corresponds to the method provided by the embodiments, its description is relatively brief, and the relevant details can be found in the description of the method.
The principles and embodiments of the present application have been described herein with reference to specific examples; this description is intended only to help in understanding the method of the present application and its core ideas. It should be noted that those skilled in the art may make modifications to the present application and practice it without departing from the spirit of the present application.
It should also be noted that, in this specification, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.

Claims (10)

1. A message scheduling method, comprising:
Receiving a message to be scheduled, sequentially filling message information of the message to be scheduled into a message linked list according to the receiving order, and determining a pointer pointing queue of the message linked list; the pointer pointing queue is used for indicating, for each message to be scheduled, the next message in the scheduling order;
Sequentially storing the messages to be scheduled into corresponding cache queues according to the message types; the P_RO queue refers to the queue of Posted messages whose Relaxed Order field is 1; the NPD_RO queue refers to the queue of Non-Posted With Data messages whose Relaxed Order field is 1; the P_SO queue refers to the queue of Posted messages whose Relaxed Order field is 0; the CPL queue refers to the queue of Completion messages; the NPR/NPD_SO queue refers to the queue of Non-Posted Request messages and of Non-Posted With Data messages whose Relaxed Order field is 0; wherein a Relaxed Order field of 0 indicates that the message may not exceed previously transmitted messages, and a Relaxed Order field of 1 indicates that the message may be transmitted out of order;
Judging whether the message type of the first message to be scheduled in the message linked list has enough flow control credit;
If not, adjusting the pointer pointing queue or determining the next message to be scheduled in the queue whose message type differs from that of the first message to be scheduled, and returning to the step of judging whether the message type of the first message to be scheduled in the message linked list has enough flow control credit;
If yes, scheduling the first message to be scheduled in the message linked list out of the corresponding cache queue, and redirecting the pointer pointing queue according to the message order-preserving requirement.
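As a rough illustration of the loop in claim 1 (a software model only; the per-type credit map, the list container and the skip-ahead policy are assumptions and do not reflect the actual hardware data path):

```cpp
#include <cstdint>
#include <iterator>
#include <list>
#include <map>

enum class MsgType { P_RO, NPD_RO, P_SO, CPL, NPR_NPD_SO };
struct PendingMsg { MsgType type; uint32_t tag; };

// Try to dispatch one message: start from the head of the linked list; if the
// head's type lacks flow control credit, walk ahead to the next message of a
// different type and test again; when a message has credit, consume one credit
// and unlink it (i.e. schedule it out of its cache queue).
bool schedule_one(std::list<PendingMsg>& linked_list,
                  std::map<MsgType, int>& credits) {
    for (auto it = linked_list.begin(); it != linked_list.end(); ++it) {
        if (credits[it->type] > 0) {
            --credits[it->type];
            linked_list.erase(it);
            return true;
        }
        // No credit for this type: skip over the remaining messages of the
        // same (blocked) type so the next iteration looks at a different type.
        MsgType blocked = it->type;
        while (std::next(it) != linked_list.end() && std::next(it)->type == blocked) {
            ++it;
        }
    }
    return false;  // nothing could be scheduled with the current credits
}
```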
2. The message scheduling method according to claim 1, wherein, when the first message to be scheduled in the message linked list is scheduled out of the corresponding cache queue and the pointer pointing queue is redirected according to the message order-preserving requirement, the method further comprises:
If the first message to be scheduled is a flexible order-preserving Non-Posted message, when the message order-preserving requirement includes that a flexible order-preserving Posted message must be able to exceed a previous flexible order-preserving Non-Posted message, setting the NPD_RO cache queue corresponding to flexible order-preserving Non-Posted messages and the P_RO cache queue corresponding to flexible order-preserving Posted messages to be scheduled by a first time slice round-robin scheduler; when either of the NPD_RO cache queue and the P_RO cache queue is in a blocking state, the messages to be scheduled in the other cache queue are output first, and time slice round-robin scheduling is resumed once the blocking state ends.
3. The message scheduling method according to claim 2, wherein, when the first message to be scheduled in the message linked list is scheduled out of the corresponding cache queue and the pointer pointing queue is redirected according to the message order-preserving requirement, the method further comprises:
When the message order-preserving requirement includes that a completion message is allowed to exceed a previous strict order-preserving Non-Posted message, setting the CPL cache queue corresponding to completion messages and the NPR/NPD_SO cache queue corresponding to strict order-preserving Non-Posted messages to be scheduled by a second time slice round-robin scheduler; when either of the CPL cache queue and the NPR/NPD_SO cache queue is in a blocking state, the messages to be scheduled in the other cache queue are output first, and time slice round-robin scheduling is resumed once the blocking state ends.
4. The message scheduling method according to claim 3, wherein, when the first message to be scheduled in the message linked list is scheduled out of the corresponding cache queue and the pointer pointing queue is redirected according to the message order-preserving requirement, the method further comprises:
When the message order-preserving requirement includes that a flexible order-preserving Posted message must be able to exceed a previous flexible order-preserving Non-Posted message, strict order-preserving Non-Posted message and completion message, while flexible order-preserving Non-Posted Request messages, strict order-preserving Non-Posted With Data messages and completion messages are forbidden to exceed a previous flexible order-preserving Posted message or strict order-preserving Posted message, adopting a first message selector to control the outputs of the second time slice round-robin scheduler and of the P_SO cache queue corresponding to strict order-preserving Posted messages, the output priority of the P_SO cache queue being higher than that of the second time slice round-robin scheduler.
5. The message scheduling method according to claim 4, wherein, when the first message selector is adopted to control the outputs of the second time slice round-robin scheduler and of the P_SO cache queue corresponding to strict order-preserving Posted messages, the method further comprises:
If the strict order-preserving Posted message does not have enough flow control credit, outputting the message to be scheduled corresponding to the second time slice round-robin scheduler;
Or, when the first timer of the first message selector expires and it is confirmed, according to the pointer pointing queue of the message linked list, that no strict order-preserving Posted message precedes the message to be scheduled output by the second time slice round-robin scheduler, outputting the message to be scheduled corresponding to the second time slice round-robin scheduler.
6. The message scheduling method according to claim 5, wherein, when the first message to be scheduled in the message linked list is scheduled out of the corresponding cache queue and the pointer pointing queue is redirected according to the message order-preserving requirement, the method further comprises:
When the message order-preserving requirement includes that flexible order-preserving Posted messages and flexible order-preserving Non-Posted With Data messages are allowed to exceed a previous strict order-preserving Posted message, strict order-preserving Non-Posted message and completion message, that a strict order-preserving Posted message is forbidden to exceed a previous strict order-preserving Posted message or flexible order-preserving Posted message, and that a strict order-preserving Posted message is allowed to exceed a previous completion message or strict order-preserving Non-Posted message, adopting a second message selector to control the outputs of the first time slice round-robin scheduler and the first message selector, the output priority of the first time slice round-robin scheduler being higher than that of the first message selector; and, when the second timer of the second message selector expires, allowing the message to be scheduled corresponding to the first message selector to be output.
7. The message scheduling method according to any one of claims 1 to 6, wherein, before receiving the message to be scheduled and sequentially filling the message information of the message to be scheduled into the message linked list according to the receiving order, the method further comprises:
Creating a message linked list containing a message type and a next message pointer;
Forming the pointer pointing queue according to each next message pointer;
And determining the pointer width of the pointer pointing queue according to the depth of the message linked list.
8. A message scheduling system, comprising:
The message receiving module is used for receiving a message to be scheduled, sequentially filling message information of the message to be scheduled into a message linked list according to the receiving order, and determining a pointer pointing queue of the message linked list; the pointer pointing queue is used for indicating, for each message to be scheduled, the next message in the scheduling order;
The message buffer module is used for sequentially storing the messages to be scheduled into corresponding cache queues according to the message types; the P_RO queue refers to the queue of Posted messages whose Relaxed Order field is 1; the NPD_RO queue refers to the queue of Non-Posted With Data messages whose Relaxed Order field is 1; the P_SO queue refers to the queue of Posted messages whose Relaxed Order field is 0; the CPL queue refers to the queue of Completion messages; the NPR/NPD_SO queue refers to the queue of Non-Posted Request messages and of Non-Posted With Data messages whose Relaxed Order field is 0; wherein a Relaxed Order field of 0 indicates that the message may not exceed previously transmitted messages, and a Relaxed Order field of 1 indicates that the message may be transmitted out of order;
A message flow control judging module, configured to judge whether the message type of the first message to be scheduled in the message linked list has enough flow control credit;
The scheduling module is configured to, when the judgment result of the message flow control judging module is negative, adjust the pointer pointing queue or determine the next message to be scheduled in the queue whose message type differs from that of the first message to be scheduled, and trigger the message flow control judging module again; and, when the judgment result of the message flow control judging module is positive, schedule the first message to be scheduled in the message linked list out of the corresponding cache queue and redirect the pointer pointing queue according to the message order-preserving requirement.
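A hypothetical classification helper for the message buffer module of claim 8 (the Header fields below are assumptions standing in for the relevant header bits, not the actual interface of the claimed system):

```cpp
enum class Queue { P_RO, NPD_RO, P_SO, CPL, NPR_NPD_SO };

struct Header {
    bool posted;         // Posted transaction
    bool completion;     // Completion transaction
    bool has_data;       // Non-Posted request carries data (NPD vs NPR)
    bool relaxed_order;  // Relaxed Order field: 1 = may be reordered
};

// Map an incoming message to one of the five cache queues.
Queue classify(const Header& h) {
    if (h.completion) return Queue::CPL;
    if (h.posted)     return h.relaxed_order ? Queue::P_RO : Queue::P_SO;
    // Non-Posted: only Non-Posted With Data with RO = 1 goes to the relaxed
    // queue; Non-Posted Requests and NPD with RO = 0 share the strict queue.
    if (h.has_data && h.relaxed_order) return Queue::NPD_RO;
    return Queue::NPR_NPD_SO;
}
```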
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the message scheduling method of any one of claims 1-7.
10. An electronic device comprising a memory and a processor, wherein the memory has a computer program stored therein, and wherein the processor, when calling the computer program in the memory, implements the steps of the message scheduling method of any of claims 1-7.
CN202410965963.9A 2024-07-18 2024-07-18 Message scheduling method, system, storage medium and electronic equipment Active CN118519729B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410965963.9A CN118519729B (en) 2024-07-18 2024-07-18 Message scheduling method, system, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410965963.9A CN118519729B (en) 2024-07-18 2024-07-18 Message scheduling method, system, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN118519729A CN118519729A (en) 2024-08-20
CN118519729B true CN118519729B (en) 2024-10-22

Family

ID=92281125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410965963.9A Active CN118519729B (en) 2024-07-18 2024-07-18 Message scheduling method, system, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN118519729B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106411872A (en) * 2016-09-21 2017-02-15 杭州迪普科技有限公司 Method and device for compressing messages based on data message classification
CN116155828A (en) * 2022-12-21 2023-05-23 北京云豹创芯智能科技有限公司 Message order keeping method and device for multiple virtual queues, storage medium and electronic equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1728698B (en) * 2004-07-30 2010-08-25 国家数字交换系统工程技术研究中心 Parallel structured order preserved flow equilibrium system, and method for dispatching message based on sorted stream
CN102204183A (en) * 2011-05-09 2011-09-28 华为技术有限公司 Message order-preserving processing method, order-preserving coprocessor and network equipment

Also Published As

Publication number Publication date
CN118519729A (en) 2024-08-20

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant