Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art. Accordingly, a first object of the present invention is to provide a data push method that allows data push and data update to proceed in parallel, reducing or eliminating the time spent waiting for data update and thereby improving the real-time performance of interrupt response.
A second object of the present invention is to provide a data push apparatus.
A third object of the present invention is to provide a chip.
A fourth object of the present invention is to provide a computer-readable storage medium.
To achieve the above objects, an embodiment of a first aspect of the present invention provides a data push method, the method including: when a stack pushing action occurs, if context data that has not been completely updated exists, skipping that context data and pushing it after all or part of the other context data has been pushed; and if no such context data exists, pushing all the context data in sequence.
In the data push method provided by the embodiment of the present invention, when a push action occurs, context data that has not been completely updated is skipped and pushed only after all or part of the other context data has been pushed; if no such context data exists, all the context data is pushed in sequence. Therefore, when context data that has not been completely updated exists, data push and data update proceed in parallel, the time spent waiting for data update is reduced or eliminated, and the real-time performance of interrupt response is improved.
According to an embodiment of the present invention, skipping the context data that has not been completely updated and pushing it after all or part of the other context data has been pushed includes: pushing all the context data in sequence; when the push reaches the context data that has not been completely updated, if it is determined that its update is still incomplete, skipping it and pushing the subsequent context data; and after the subsequent context data has been pushed, pushing the skipped context data.
According to an embodiment of the present invention, skipping the context data that has not been completely updated and pushing it after all or part of the other context data has been pushed further includes: when the push reaches the context data that has not been completely updated, if it is determined that its update has already completed, pushing all the context data in sequence.
According to another embodiment of the present invention, skipping the context data that has not been completely updated and pushing it after all or part of the other context data has been pushed includes: sequentially pushing the context data located after the context data that has not been completely updated, then sequentially pushing the context data located before it, and finally pushing the context data that has not been completely updated.
According to another embodiment of the present invention, skipping the context data that has not been completely updated and pushing it after all or part of the other context data has been pushed includes: dividing all the context data into a plurality of data intervals; and after determining the data interval in which the not-yet-updated context data is located, first pushing the context data of the other data intervals, and then pushing the context data of that interval.
According to an embodiment of the present invention, the plurality of data intervals includes a first data interval and a second data interval of the same data length, and the data update duration of the not-yet-updated context data is shorter than the data push duration of each of the first and second data intervals. Pushing the context data of the other data intervals first and the context data of the interval containing the not-yet-updated context data afterwards then includes: when the not-yet-updated context data is in the first data interval, pushing the context data of the second data interval first and the context data of the first data interval afterwards; and when the not-yet-updated context data is in the second data interval, pushing the context data of the first data interval first and the context data of the second data interval afterwards.
According to another embodiment of the present invention, the plurality of data intervals includes a third data interval, a fourth data interval, and a fifth data interval; the third and fourth data intervals have the same data length, the not-yet-updated context data is located in the third or the fourth data interval, and its data update duration is shorter than the data push duration of each of the third and fourth data intervals. Pushing the context data of the other data intervals first and the context data of the interval containing the not-yet-updated context data afterwards then includes: when the not-yet-updated context data is in the third data interval, pushing the context data of the fourth data interval first, then the context data of the third data interval, and finally the context data of the fifth data interval; and when the not-yet-updated context data is in the fourth data interval, pushing the context data of the third data interval first, then the context data of the fourth data interval, and finally the context data of the fifth data interval.
According to one embodiment of the invention, the context data includes one or more of program status register data, fixed point register data, floating point register data, and interrupt return addresses.
According to one embodiment of the invention, pushing the context data that has not been completely updated includes: if it is determined that the context data has finished updating, pushing the updated context data; and if it is determined that the update is still incomplete, waiting for the update to finish and then pushing the updated context data.
To achieve the above objects, an embodiment of a second aspect of the present invention provides a data push apparatus, including: a determining module configured to determine, when a push action occurs, whether context data that has not been completely updated exists; and a push module configured to, when the determining module determines that such context data exists, skip it and push it after all or part of the other context data has been pushed, and, when the determining module determines that no such context data exists, push all the context data in sequence.
In the data push apparatus provided by the embodiment of the present invention, the determining module determines, when a push action occurs, whether context data that has not been completely updated exists; the push module skips such context data and pushes it after all or part of the other context data has been pushed, and pushes all the context data in sequence when no such context data exists. Therefore, when context data that has not been completely updated exists, data push and data update proceed in parallel, the time spent waiting for data update is reduced or eliminated, and the real-time performance of interrupt response is improved.
According to an embodiment of the present invention, the push module is specifically configured to: push all the context data in sequence; when the push reaches the context data that has not been completely updated, if it is determined that its update is still incomplete, skip it and push the subsequent context data; and after the subsequent context data has been pushed, push the skipped context data.
According to an embodiment of the present invention, the push module is specifically configured to: sequentially push the context data located after the context data that has not been completely updated, then sequentially push the context data located before it, and finally push the context data that has not been completely updated.
According to an embodiment of the present invention, the push module is specifically configured to: divide all the context data into a plurality of data intervals; and after determining the data interval in which the not-yet-updated context data is located, first push the context data of the other data intervals, and then push the context data of that interval.
To achieve the above objects, an embodiment of a third aspect of the present invention provides a chip including a memory and a processor, where the memory stores a computer program, and the processor, when executing the computer program, implements the steps of the data push method described above.
With the chip of the embodiment of the present invention, by virtue of the above data push method, data push and data update can be performed in parallel when context data that has not been completely updated exists, and the time spent waiting for data update is reduced or eliminated, thereby improving the real-time performance of interrupt response.
To achieve the above objects, an embodiment of a fourth aspect of the present invention provides a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the steps of the data push method described above.
With the computer-readable storage medium of the embodiment of the present invention, by virtue of the above data push method, data push and data update can be performed in parallel when context data that has not been completely updated exists, and the time spent waiting for data update is reduced or eliminated, thereby improving the real-time performance of interrupt response.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
Generally, when context data is pushed, it must be ensured that the data pushed onto the stack is the freshest data of the program. That is, when a push action occurs, if an instruction that updates context data to be pushed has not finished executing, the push of that context data must wait until the instruction completes. If the instruction consumes many clock cycles, the response time of an exception or interrupt increases significantly and the real-time performance of interrupt response degrades; this is particularly pronounced for floating-point context push, especially when the currently executing instruction is a floating-point division. For example, if a floating-point division that updates a floating-point register takes 16 clock cycles to execute and the fixed-point context push takes 8 clock cycles, the processor must wait a further 8 clock cycles after the fixed-point context push completes to ensure that the floating-point register holds the result of the division, which significantly increases the response time of the MCU to the interrupt. The present application therefore provides a data push optimization method that overlaps the push time with the instruction execution time, reducing the time spent waiting for instruction completion and thereby improving the real-time performance of the MCU's interrupt response.
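The cycle arithmetic in this example can be sketched in a few lines of Python; `wait_cycles` is a hypothetical helper, while the 16-cycle divide and 8-cycle fixed-point-push figures are the ones from the text:

```python
def wait_cycles(update_cycles: int, other_push_cycles: int) -> int:
    """Clock cycles the processor still stalls when the push of the
    not-yet-updated register is deferred until after the other context
    data has been pushed (the overlapped scheme)."""
    return max(0, update_cycles - other_push_cycles)

# From the text: a 16-cycle divide overlapped with an 8-cycle push of the
# other (fixed-point) context still leaves an 8-cycle wait; once the other
# pushes take at least 16 cycles, the wait disappears entirely.
assert wait_cycles(16, 8) == 8
assert wait_cycles(16, 16) == 0
```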
Fig. 1 is a flowchart of a data push method according to an embodiment of the present invention, and as shown in fig. 1, the data push method may include the following steps:
Step S100, when a push action occurs, if context data that has not been completely updated exists, the context data is skipped and pushed after all or part of the other context data has been pushed.
That is to say, when a push action occurs, the context data that has not been completely updated is not pushed immediately; instead, it is pushed after all or part of the other context data has been pushed. During this period the instruction remains in execution and continues updating that context data, so by the time the other context data has been pushed, the time spent waiting for instruction execution, that is, the time spent waiting for data update, is reduced or eliminated, and the interrupt response time is shortened accordingly.
For example, if the instruction execution time is slightly shorter than, or far shorter than, the time needed to push all the other context data, the previously not-yet-updated context data will already be updated by the time the other context data has been pushed; it can then be pushed directly, and the time spent waiting for data update is eliminated. If the instruction execution time is longer than the time needed to push all the other context data, the context data will still not be updated when the other context data has been pushed; in that case the waiting time cannot be eliminated, but it is still reduced.
In some embodiments, pushing the not-yet-updated context data includes: after the context data other than the not-yet-updated context data has been pushed, if it is determined that the not-yet-updated context data has finished updating, pushing the updated context data; and if it is determined that the update is still incomplete, waiting for the update to finish and then pushing the updated context data. That is to say, once all the other context data has been pushed and before the previously skipped context data is pushed, it is determined whether that context data has finished updating; if so, the updated context data is pushed, and otherwise the push is performed only after the update completes, so as to guarantee the freshness of the data.
Step S200, if the context data which are not updated completely do not exist, all the context data are pushed in sequence.
That is, when the push action occurs, if all the context data have been updated, all the context data are pushed in order directly, which may specifically adopt the prior art and is not described in detail herein.
Optionally, the context data may include one or more of program status register data, fixed point register data, floating point register data, and interrupt return addresses.
Further, as a specific example, as shown in fig. 2, the data push method may include the following steps:
step S101, responding to the interrupt request.
Step S102, determine whether there are context registers (e.g., program status register, fixed point register, floating point register) that have not finished updating. If yes, execute step S103; otherwise, execute step S107.
Step S103, recording the context register to be updated, and pushing other context data, such as context data that has been updated or does not need to be updated.
It should be noted that outstanding instructions are also being executed in parallel while pushing other context data.
Step S104, after the other context data has been pushed, determine whether the context register to be updated has finished updating. If yes, execute step S105; otherwise, execute step S106.
Step S105, push the data of the context register to be updated until the push is finished.
Step S106, wait for the context register to be updated to finish updating, and after the update is finished, execute step S105.
Step S107, push all the context data in sequence until the push is finished.
Therefore, during the push process, if context data that has not been completely updated exists, the context data that has already been updated or that needs no update is pushed first, so that the push operation and the update operation proceed in parallel; that is, the instruction execution time and the push time overlap, and the time spent waiting for instruction completion can be reduced or eliminated.
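The flow of steps S101 to S107 can be sketched as a minimal Python model; the register names and the `push_order` helper are illustrative, not part of the method's specification:

```python
def push_order(registers, pending):
    """Return the push order of Fig. 2: registers whose update is still
    in flight (`pending`) are recorded and skipped, everything else is
    pushed first, and the skipped registers are pushed afterwards."""
    ready = [r for r in registers if r not in pending]
    deferred = [r for r in registers if r in pending]
    return ready + deferred

# A floating-point register F0 still being written is deferred to the end;
# with nothing pending, step S107 applies and the order is unchanged.
assert push_order(["PSR", "F0", "R0", "LR"], {"F0"}) == ["PSR", "R0", "LR", "F0"]
assert push_order(["PSR", "R0"], set()) == ["PSR", "R0"]
```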
Three specific data push schemes are developed below based on the above data push method.
As a first example, skipping the context data that has not been completely updated and pushing it after all or part of the other context data has been pushed includes: pushing all the context data in sequence; when the push reaches the context data that has not been completely updated, if it is determined that its update is still incomplete, skipping it and pushing the subsequent context data; and after the subsequent context data has been pushed, pushing the skipped context data.
That is to say, when a push action occurs and context data that has not been completely updated exists, all the context data is pushed in sequence; when the push reaches the not-yet-updated context data, it is determined whether its update has completed, and if not, it is skipped and the context data after it continues to be pushed; after the subsequent context data has been pushed, the previously skipped context data is pushed. Further, when the push reaches the not-yet-updated context data, if its update has already completed, all the context data is simply pushed in sequence.
As a specific example, as shown in fig. 3, the whole data push process may include the following steps:
step S201, responding to the interrupt request.
In step S202, it is determined whether there are context registers (e.g., program status register, fixed point register, and floating point register) that have not been updated. If yes, go to step S203; otherwise, step S209 is performed.
Step S203, recording the context register to be updated, and pushing all the context data in sequence.
Step S204, when the push reaches the context register to be updated, determine whether the register has already finished updating. If so, go to step S209; otherwise, execute step S205.
Step S205, skip the context register to be updated and push the subsequent context data.
Step S206, after the subsequent context data push is completed, it is determined whether the context register to be updated is completely updated. If yes, go to step S207; otherwise, step S208 is performed.
Step S207, push the context register to be updated until the push is completed.
Step S208, wait for the update of the context register to be updated to be completed, and after the update is completed, execute step S207.
Step S209, push all the context data in sequence until the push is finished.
Therefore, during data push the context data is pushed in sequence; when the push reaches the not-yet-updated context data, it is skipped, the subsequent context data is pushed, and finally the skipped context data is pushed. The push time and the instruction execution time thereby overlap, the time spent waiting for instruction execution is reduced or eliminated, and the interrupt response time is shortened.
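The first scheme (steps S201 to S209) can be modeled with an explicit cycle count to show where waiting, if any, remains; the uniform one-cycle push cost and the helper name are assumptions made only for illustration:

```python
def sequential_skip_order(registers, updating, ready_at, push_cost=1):
    """Fig. 3 scheme: push in sequence; when the turn of the in-flight
    register comes and its update is unfinished, skip it, push the rest,
    and push it last. Returns (push order, cycles spent waiting)."""
    order, t, deferred = [], 0, False
    for reg in registers:
        if reg == updating and t < ready_at:
            deferred = True           # step S205: skip, keep pushing
            continue
        order.append(reg)
        t += push_cost
    wait = 0
    if deferred:
        wait = max(0, ready_at - t)   # step S208: wait only if still busy
        order.append(updating)        # step S207: push the skipped register
    return order, wait

# F0 finishes updating at cycle 3, exactly when the other three registers
# have been pushed: no wait. With a much slower update, some wait remains.
assert sequential_skip_order(["PSR", "F0", "R0", "R1"], "F0", ready_at=3) == (["PSR", "R0", "R1", "F0"], 0)
assert sequential_skip_order(["PSR", "F0", "R0", "R1"], "F0", ready_at=10) == (["PSR", "R0", "R1", "F0"], 7)
```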
As a second example, skipping the context data that has not been completely updated and pushing it after all or part of the other context data has been pushed includes: sequentially pushing the context data located after the context data that has not been completely updated, then sequentially pushing the context data located before it, and finally pushing the context data that has not been completely updated.
That is, the data push can be performed in a rotated order: the not-yet-updated context data is identified when the push begins, the context data after it is pushed first, then the context data before it, and finally the not-yet-updated context data itself.
As a specific example, as shown in fig. 4, the data push process may include the following steps:
step S301, responding to the interrupt request.
Step S302, determine whether there are context registers (e.g., program status register, fixed point register, floating point register) that have not finished updating. If yes, execute step S303; otherwise, execute step S308.
Step S303, record the index T of the context register to be updated, and push in sequence from index position T+1 to the tail of the stack.
Step S304, push the stack from the position of the stack start to the register index T-1.
Step S305, determine whether the register with index T has finished updating. If yes, execute step S306; otherwise, execute step S307.
Step S306, the data corresponding to the register index T is pushed until the pushing is finished.
Step S307, waiting for the data update corresponding to the register index T to be completed, and after the update is completed, executing step S306.
Step S308, push all the context data in sequence until the push is finished.
Therefore, during data push, if context data that has not been completely updated is found, the context data after it is pushed first, up to the end; the push then continues from the start up to the not-yet-updated context data; and finally the not-yet-updated context data is pushed. The push time and the instruction execution time thereby overlap, the time spent waiting for instruction execution is reduced or eliminated, and the interrupt response time is shortened.
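The rotation of steps S301 to S308 amounts to a simple reordering around the recorded index T; the helper name and list encoding below are illustrative:

```python
def rotate_order(registers, t):
    """Fig. 4 scheme: push from index T+1 to the tail, then from the
    start up to T-1, and finally push index T itself."""
    return registers[t + 1:] + registers[:t] + [registers[t]]

# With T = 1, indices are pushed in the order 2, 3, 0, 1.
assert rotate_order(["A", "B", "C", "D"], 1) == ["C", "D", "A", "B"]
assert rotate_order(["A", "B", "C"], 0) == ["B", "C", "A"]
```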
As a third example, skipping the context data that has not been completely updated and pushing it after all or part of the other context data has been pushed includes: dividing all the context data into a plurality of data intervals; and after determining the data interval in which the not-yet-updated context data is located, first pushing the context data of the other data intervals, and then pushing the context data of that interval.
That is, the stack content may be divided into areas, and by adjusting the push order of the areas, the areas that do not contain the not-yet-updated context data are pushed first, and the area that contains it is pushed afterwards.
In some embodiments, the plurality of data intervals includes a first data interval and a second data interval of the same data length, and the data update duration of the not-yet-updated context data is shorter than the data push duration of each of the two intervals. Pushing the context data of the other data intervals first and the context data of the interval containing the not-yet-updated context data afterwards then includes: when the not-yet-updated context data is in the first data interval, pushing the context data of the second data interval first and the context data of the first data interval afterwards; and when it is in the second data interval, pushing the context data of the first data interval first and the context data of the second data interval afterwards.
As a specific example, as shown in fig. 5, the data push process may include the following steps:
step S401, responding to the interrupt request.
Step S402, judging whether there is context register which is not updated currently. If yes, go to step S403; otherwise, step S409 is performed.
Step S403, record the context register to be updated, and divide all the context data into two intervals: [0, a) and [a, b]. It should be noted that the update duration of the context register to be updated, that is, the execution duration of the corresponding instruction, is shorter than the data push duration of interval [0, a) and shorter than that of interval [a, b]; this guarantees that the register has finished updating by the time the data of either interval has been pushed, so the time spent waiting for the data update, that is, for the instruction to finish executing, can be eliminated.
Step S404, determine whether the context register to be updated is in the interval [0, a). If yes, execute step S405; otherwise, execute step S407.
Step S405, push the data in the interval [a, b].
Step S406, after the data in the interval [a, b] has been pushed, push the data in the interval [0, a) until the push is finished.
Step S407, push the data in the interval [0, a).
Step S408, after the data in the interval [0, a) has been pushed, push the data in the interval [a, b] until the push is finished.
Step S409, push all the context data in sequence until the push is finished.
Therefore, when the execution time of the outstanding instruction is less than one half of the total push time, the stack content can be divided into two equal-length areas [0, a) and [a, b], and the waiting time caused by the data dependency is eliminated by adjusting the push order of the two areas, thereby shortening the interrupt response time.
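The two-interval reordering of Fig. 5 can be sketched as follows; the split point `a`, the list encoding, and the helper name are illustrative assumptions, and the scheme only relies on the stated condition that the pending update is shorter than either interval's push time:

```python
def two_interval_order(data, a, pending_index):
    """Fig. 5 scheme: split the stack content into [0, a) and [a, b];
    push the interval that does not hold the pending register first."""
    first, second = data[:a], data[a:]
    return second + first if pending_index < a else first + second

# Pending register at index 1 lies in [0, a): push [a, b] first (S405/S406);
# at index 3 it lies in [a, b]: push [0, a) first (S407/S408).
assert two_interval_order([0, 1, 2, 3], a=2, pending_index=1) == [2, 3, 0, 1]
assert two_interval_order([0, 1, 2, 3], a=2, pending_index=3) == [0, 1, 2, 3]
```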
In other embodiments, the plurality of data intervals includes a third data interval, a fourth data interval, and a fifth data interval; the third and fourth data intervals have the same data length, the not-yet-updated context data is located in the third or the fourth data interval, and its data update duration is shorter than the data push duration of each of the third and fourth data intervals. Pushing the context data of the other data intervals first and the context data of the interval containing the not-yet-updated context data afterwards then includes: when the not-yet-updated context data is in the third data interval, pushing the context data of the fourth data interval first, then the context data of the third data interval, and finally the context data of the fifth data interval; and when the not-yet-updated context data is in the fourth data interval, pushing the context data of the third data interval first, then the context data of the fourth data interval, and finally the context data of the fifth data interval.
That is, the number of data intervals may be two or three. When the execution time of the outstanding instruction is slightly less than one half of the total push time, two data intervals may be used, such as the first and second data intervals, with the push process as described above. When the execution time is far less than the total push time, three data intervals may be used, such as the third, fourth, and fifth data intervals, where two of them have the same data length and hence the same push duration, the not-yet-updated context data lies in one of these two intervals, and its update duration is shorter than the push duration of each of them. In other words, based on the instruction execution time and the position of the not-yet-updated context data, the stack content is divided into three intervals, two of which have the same length and a push duration slightly longer than the instruction execution duration; while one of these two intervals is being pushed, the instruction can finish executing, that is, the not-yet-updated context data finishes updating. Data push then proceeds over the three intervals, as in the example shown in fig. 6.
As a specific example, referring to fig. 6, the data push process may include the following steps:
step S501, responding to the interrupt request.
Step S502, judge whether there is context register that has not finished updating at present. If yes, go to step S503; otherwise, step S510 is executed.
Step S503, record the context register to be updated, and divide all the context data into three intervals: [0, a), [a, b) and [b, c]. It should be noted that the update duration of the context register to be updated, that is, the execution duration of the corresponding instruction, is less than the data push duration corresponding to the interval [0, a) and less than the data push duration corresponding to the interval [a, b), and the context register to be updated is located in the interval [0, a) or the interval [a, b). This ensures that the context register to be updated has finished updating by the time the data push for one of these two intervals is completed, so that the time spent waiting for the data update, that is, the instruction execution time, can be eliminated.
Step S504, determine whether the context register to be updated is in the interval [0, a). If yes, go to step S505; otherwise, go to step S507.
Step S505, push the data in the interval [a, b).
Step S506, after the data in the interval [a, b) is pushed, push the data in the interval [0, a) until the push is completed.
Step S507, push the data in the interval [0, a).
Step S508, after the data in the interval [0, a) is pushed, push the data in the interval [a, b) until the push is completed.
Step S509, push the data in the interval [b, c] until the push is completed.
Step S510, push all the context data in sequence until the push is completed.
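The flow of steps S501 to S510 can be sketched as follows. This is a minimal illustration with a hypothetical function name, which returns only the resulting push order of context-data indices (the boundaries a, b, c play the roles of the intervals [0, a), [a, b), [b, c] above):

```python
def push_order_three_intervals(a, b, c, pending_index=None):
    """Return the push order of context-data indices 0..c for the three
    intervals [0, a), [a, b), [b, c], following steps S501-S510.

    pending_index is the index of the context register that has not
    finished updating (None if all registers are up to date, i.e. S510).
    """
    first = list(range(0, a))       # interval [0, a)
    second = list(range(a, b))      # interval [a, b)
    tail = list(range(b, c + 1))    # interval [b, c]
    if pending_index is None:
        return first + second + tail          # S510: natural order
    if pending_index < a:                     # S504: pending in [0, a)
        return second + first + tail          # S505, S506, S509
    return first + second + tail              # S507, S508, S509
```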
Therefore, the stack content can be divided into multiple intervals for processing. As above, the stack content is divided into three intervals [0, a), [a, b) and [b, c], where the size of interval [0, a) equals that of [a, b), the execution time of the instruction is less than the time needed to push the data corresponding to [0, a) or [a, b), and the data to be updated lies in either of these two intervals. This ensures that the instruction can complete the update of the context data during the push process, so that no waiting time is introduced into the push.
Further, in order to make the data push process corresponding to fig. 6 clearer to those skilled in the art, a more specific example is described below.
For example, take Arm-M, a common instruction set architecture for embedded processors. When a floating-point division instruction has not completed and the currently executing program encounters an interrupt request, the push of the fixed-point context may be performed first, so that the floating-point division and the push proceed in parallel. Then, when the push of the floating-point context begins, it is determined whether the target register index of the floating-point division falls within the interval S0 to S7. If so, there is data in the floating-point registers S0 to S7 that has not finished updating; the data push of S0 to S7 is therefore skipped and the data push of the floating-point registers S8 to S15 is started, avoiding the delay of waiting for the floating-point division target register to be updated. By the time the data push of the floating-point registers S8 to S15 is completed, the push of the fixed-point context and the floating-point context has taken 16 clock cycles (the push of the fixed-point context alone takes 8 clock cycles). If the division takes 16 cycles or fewer, the data of S0 to S7 have by then been updated with the result of the floating-point division, so the data of S0 to S7 can be pushed onto the stack without interruption, eliminating the time the push would otherwise spend waiting for the floating-point registers to be updated.
Similarly, if the floating-point division target register is not in the interval S0 to S7, the contexts of the floating-point registers S0 to S7 are pushed onto the stack first. After the context push of S0 to S7 is completed, the push of the fixed-point context and the floating-point context has likewise taken 16 clock cycles; when the division takes 16 clock cycles or fewer, its result has been written to the target register, so the subsequent floating-point context push can continue, eliminating the time the floating-point push would spend waiting for the division to update the target register. The specific data push process is shown in fig. 7 and may include the following steps:
Step S601, respond to the interrupt request.
Step S602, push fixed point register data.
In step S603, it is determined whether the floating-point division target register is in S0 to S7. If yes, go to step S604; otherwise, step S606 is executed.
In step S604, the data of the floating-point registers S8 to S15 are pushed.
In step S605, after the data push of the floating point registers S8 to S15 is completed, the data of the floating point registers S0 to S7 are pushed.
In step S606, the data of the floating-point registers S0 to S7 are pushed.
In step S607, after the data push of the floating point registers S0 to S7 is completed, the data of the floating point registers S8 to S15 are pushed.
In step S608, the data of the floating-point registers S16 to S31 are pushed.
Step S609, push the floating-point status register data until the push is completed.
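The ordering of steps S601 to S609 for this Arm-M example might be sketched as follows. This is a simplified illustration returning only the push order of the floating-point register indices S0 to S31; the fixed-point push of step S602 and the status-register push of step S609 are omitted, and the function name is hypothetical:

```python
def float_push_order(div_target: int) -> list:
    """Push order of floating-point registers S0-S31 per fig. 7.

    If the floating-point division target register falls in S0-S7,
    S8-S15 is pushed first so that a division of at most 16 cycles
    (overlapping the 8-cycle fixed-point push plus the 8-cycle S8-S15
    push) completes before S0-S7 is pushed.
    """
    low = list(range(0, 8))      # S0-S7
    mid = list(range(8, 16))     # S8-S15
    high = list(range(16, 32))   # S16-S31
    if 0 <= div_target <= 7:
        return mid + low + high  # steps S604, S605, then S608
    return low + mid + high      # steps S606, S607, then S608
```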
Furthermore, the push method can be extended to accommodate longer division operation clock periods, giving it general applicability.
Specifically, assuming that the floating-point operation takes m cycles and the push of the fixed-point context takes n clock cycles, the floating-point context index may be chosen as t = m - n, and the floating-point registers are divided into three intervals: S(0) to S(t-1), S(t) to S(2t-1), and S(2t) to S31. When the index of the floating-point division target register is in the interval S(0) to S(t-1), the data of the floating-point registers S(t) to S(2t-1) are pushed first, and then the data of the floating-point registers S(0) to S(t-1) are pushed; when the index of the floating-point division target register is not in the interval S(0) to S(t-1), the data of the floating-point registers S(0) to S(t-1) are pushed first, followed by the subsequent floating-point push. The specific data push process is shown in fig. 8 and may include the following steps:
Step S701, respond to the interrupt request.
Step S702, push fixed point register data.
In step S703, it is determined whether the floating-point division target register is in S (0) to S (t-1). If so, go to step S704; otherwise, step S706 is performed.
Step S704, push the data of the floating-point registers S (t) to S (2 t-1).
In step S705, after the data push of the floating-point registers S (t) to S (2t-1) is completed, the data of the floating-point registers S (0) to S (t-1) are pushed.
In step S706, the data of the floating-point registers S (0) to S (t-1) are pushed.
In step S707, after the data push of the floating-point registers S (0) to S (t-1) is completed, the data of the floating-point registers S (t) to S (2t-1) are pushed.
In step S708, the data of the floating-point registers S (2t) to S31 are pushed.
Step S709, push the floating point status register data until the push is completed.
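The generalized division of steps S701 to S709 might be sketched as follows. Again this is a simplified illustration with a hypothetical function name, returning only the push order of the floating-point register indices and assuming, as the example implies, one register push per cycle with 0 < t and 2t ≤ 32:

```python
def generalized_push_order(m: int, n: int, div_target: int) -> list:
    """Generalized fig. 8 ordering: with a floating-point operation of m
    cycles and a fixed-point context push of n cycles, choose t = m - n
    and split S0-S31 into S(0)..S(t-1), S(t)..S(2t-1) and S(2t)..S31.
    """
    t = m - n
    first = list(range(0, t))        # S(0)..S(t-1)
    second = list(range(t, 2 * t))   # S(t)..S(2t-1)
    rest = list(range(2 * t, 32))    # S(2t)..S31
    if 0 <= div_target < t:
        return second + first + rest  # steps S704, S705, S708
    return first + second + rest      # steps S706, S707, S708
```

With m = 16 and n = 8 this reduces to t = 8, i.e. exactly the S0-S7 / S8-S15 split of the previous Arm-M example.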
It should be noted that each of the three implementations has its own advantages in applicability and practicability, and the choice among them can be made flexibly according to the actual instruction execution time and the amount of context data; the specific details are not limited herein. Moreover, the data push method is suitable not only for pushing fixed-point data but also for pushing floating-point data, and applies to multi-cycle floating-point or fixed-point instructions such as floating-point division and floating-point multiplication; because the execution time of a floating-point division instruction is usually long, the improvement in the real-time performance of the interrupt response is particularly pronounced in that case. In addition, the idea of the data push method in the present application is to adjust the context push order according to the context data still to be updated, so as to hide the update time of that data within the context push process, thereby reducing or eliminating the time spent waiting for the data update.
In summary, according to the data push method of the embodiment of the present invention, when a push action occurs, if there is context data that is not updated, the context data that is not updated is skipped, and the context data that is not updated is pushed again after all or part of other context data is pushed; and if the context data which are not updated completely do not exist, pushing all the context data in sequence. Therefore, when the context data which is not updated completely exists, the data push and the data updating are enabled to be parallel, the time for waiting for the data updating is reduced or eliminated, and the real-time performance of the interrupt response is improved.
Fig. 9 is a schematic structural diagram of a data push apparatus according to an embodiment of the present invention, and referring to fig. 9, the data push apparatus 50 may include: a determination module 51 and a push module 52.
The determining module 51 is configured to determine, when a push action occurs, whether there is context data that has not finished updating. The pushing module 52 is configured to, when the determining module determines that such context data exists, skip it and push it after all or part of the other context data has been pushed; and to push all the context data in sequence when the determining module determines that no such context data exists.
As a first embodiment, the push module 52 is specifically configured to: push all the context data in sequence; upon reaching the context data that has not finished updating, if it is determined that this data has still not finished updating, skip it and push the subsequent context data; and, after the subsequent context data has been pushed, push the context data that had not finished updating.
Further, in some embodiments, the push module 52 is further configured to: upon reaching the context data that had not finished updating, if it is determined that this data has finished updating, continue to push all the context data in sequence.
As a second embodiment, the push module 52 is specifically configured to: first push, in sequence, the context data ordered after the context data that has not finished updating; then push, in sequence, the context data ordered before it; and finally push the context data that has not finished updating.
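A minimal sketch of this second-embodiment ordering, with hypothetical names (`total` is the number of context-data entries, `pending` the index of the entry that has not finished updating):

```python
def skip_and_wrap_order(total: int, pending: int) -> list:
    """Second-embodiment ordering: push, in sequence, the context data
    after the not-yet-updated entry, then the context data before it,
    and finally the entry itself (by which time it has been updated)."""
    after = list(range(pending + 1, total))
    before = list(range(0, pending))
    return after + before + [pending]
```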
As a third embodiment, the push module 52 is specifically configured to: dividing all context data into a plurality of data intervals; after the data interval in which the context data which is not updated is located is determined, the context data corresponding to other data intervals is firstly pushed, and then the context data corresponding to the data interval in which the context data which is not updated is located is pushed.
Further, in some embodiments, the multiple data intervals include a first data interval and a second data interval having the same data length, and the data update duration of the context data that has not finished updating is less than the data push duration of each of the first and second data intervals. The push module 52 is further configured to: when the context data that has not finished updating is in the first data interval, push the context data corresponding to the second data interval first and then the context data corresponding to the first data interval; when the context data that has not finished updating is in the second data interval, push the context data corresponding to the first data interval first and then the context data corresponding to the second data interval.
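A minimal sketch of this two-equal-interval case, with hypothetical names (`split` marks the boundary between the two intervals, `total` the number of entries, and `pending` the index of the entry that has not finished updating):

```python
def two_interval_order(split: int, total: int, pending: int) -> list:
    """Two-equal-interval ordering: if the not-yet-updated context data
    lies in the first interval [0, split), push the second interval
    [split, total) first and then the first; otherwise push in the
    natural order, so the pending entry's interval always goes last."""
    first = list(range(0, split))
    second = list(range(split, total))
    return second + first if pending < split else first + second
```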
Further, in some embodiments, the multiple data intervals include a third data interval, a fourth data interval, and a fifth data interval; the third data interval and the fourth data interval have the same data length, the context data that has not finished updating is located in the third data interval or the fourth data interval, and its data update duration is less than the data push duration of each of the third and fourth data intervals. The push module 52 is further configured to: when the context data that has not finished updating is in the third data interval, push the context data corresponding to the fourth data interval first, then the context data corresponding to the third data interval, and finally the context data corresponding to the fifth data interval; when the context data that has not finished updating is in the fourth data interval, push the context data corresponding to the third data interval first, then the context data corresponding to the fourth data interval, and finally the context data corresponding to the fifth data interval.
In some embodiments, the context data includes one or more of program status register data, fixed point register data, floating point register data, and interrupt return addresses.
In some embodiments, the push module 52 is further configured to: if it is determined that the context data that had not finished updating has now finished updating, push the updated context data; if it is determined that it has still not finished updating, push it once its update is completed.
It should be noted that, for the description of the data push apparatus in the present application, please refer to the description of the data push method in the present application, and details are not repeated here.
According to the data push apparatus of the embodiment of the present invention, the determining module determines, when a push action occurs, whether there is context data that has not finished updating; when such context data exists, the pushing module skips it and pushes it after all or part of the other context data has been pushed, and when no such context data exists, pushes all the context data in sequence. Therefore, when there is context data that has not finished updating, data push and data update proceed in parallel, the time spent waiting for the data update is reduced or eliminated, and the real-time performance of the interrupt response is improved.
In some embodiments, an embodiment of the present invention further provides a chip including a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the data push method as described above when executing the computer program.
According to the chip provided by the embodiment of the invention, by the data push method, when context data which are not updated completely exist, data push and data updating can be performed in parallel, and the time for waiting for data updating is reduced or eliminated, so that the real-time performance of interrupt response is improved.
In some embodiments, embodiments of the present invention also provide a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the data push method as described above.
According to the computer-readable storage medium of the embodiment of the invention, by the data push method, when context data which are not updated completely exist, data push and data updating can be performed in parallel, and the time for waiting for data updating is reduced or eliminated, so that the real-time performance of interrupt response is improved.
It should be noted that the logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, be fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; they may be directly connected or indirectly connected through intervening media, or they may be connected internally or in any other suitable relationship, unless expressly stated otherwise. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.