
CN115129617B - Data processing method, processor and electronic equipment


Info

Publication number
CN115129617B
Authority
CN
China
Prior art keywords
buffer
data
processor
eviction
request
Prior art date
Legal status
Active
Application number
CN202210897134.2A
Other languages
Chinese (zh)
Other versions
CN115129617A (en)
Inventor
翁志强
王琪
李耀
Current Assignee
ARM Technology China Co Ltd
Original Assignee
ARM Technology China Co Ltd
Priority date
Filing date
Publication date
Application filed by ARM Technology China Co Ltd filed Critical ARM Technology China Co Ltd
Priority to CN202210897134.2A
Publication of CN115129617A
Application granted
Publication of CN115129617B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06F: ELECTRIC DIGITAL DATA PROCESSING
          • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
            • G06F 12/02: Addressing or allocation; Relocation
              • G06F 12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
                • G06F 12/0802: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
                  • G06F 12/0806: Multiuser, multiprocessor or multiprocessing cache systems
          • G06F 13/00: Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
            • G06F 13/14: Handling requests for interconnection or transfer
              • G06F 13/16: Handling requests for interconnection or transfer for access to memory bus
                • G06F 13/1668: Details of memory controller
                  • G06F 13/1673: Details of memory controller using buffers
            • G06F 13/38: Information transfer, e.g. on bus
              • G06F 13/40: Bus structure
                • G06F 13/4063: Device-to-bus coupling
                  • G06F 13/4068: Electrical coupling
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
      • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
        • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
          • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The application relates to the technical field of processors and discloses a data processing method, a processor and an electronic device. The electronic device comprises a processor and a memory, and the processor comprises a line filling buffer, a line request buffer and an eviction buffer. The method comprises the following steps: the processor detects a data reading request; the processor determines the current working states of the line filling buffer and the eviction buffer; when the working states of the line filling buffer and the eviction buffer are both a first type state, the processor temporarily stores the data reading request, the first type state being a non-idle state; when the working state of the line filling buffer or the eviction buffer is a second type state, the processor sends the data reading request to the memory, the second type state being an idle state. This scheme effectively avoids the deadlock that occurs when returned data cannot be received after a data reading request has been sent, and effectively reduces the area of the processor.

Description

Data processing method, processor and electronic equipment
Technical Field
The present application relates to the field of processor technologies, and in particular, to a data processing method, a processor, and an electronic device.
Background
Currently, a central processing unit (CPU) generally needs to read data from memory before processing it. Because the external memory is far from the CPU, fetching data from the external memory is slow. A cache and a line fill buffer (LFB) are therefore generally provided inside the CPU: when the CPU processes data, a request can be sent to the outside through the LFB, and the data in the external memory can be stored into the cache in advance, in units of cache lines, through the LFB. When the CPU then needs the data, it can be read directly from the cache memory, which improves the data processing speed of the CPU.
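For illustration only, the relationship between the cache, its cache lines and the line fill buffer described above can be sketched with the following data structures; the field names and the 64-byte line size are assumptions made for the example and are not taken from the patent.

#include <stdbool.h>
#include <stdint.h>

#define CACHE_LINE_BYTES 64

typedef struct {
    uint64_t tag;                    /* address tag of the cached line        */
    uint8_t  data[CACHE_LINE_BYTES]; /* data is cached in cache-line units    */
    bool     valid;
    bool     modified;               /* dirty line: must be written back      */
} cache_line_t;

typedef struct {
    uint64_t addr;                   /* address requested from external memory */
    uint8_t  data[CACHE_LINE_BYTES]; /* staging area for the returned line     */
    bool     busy;                   /* non-idle while a fill is in flight     */
} line_fill_buffer_t;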
At present, a CPU may send a plurality of line filling requests to the external memory within a short time in order to obtain the corresponding data, so a CPU generally provides a plurality of LFBs to store the data corresponding to the different line filling requests into the cache memory. However, each LFB occupies a relatively large CPU area, so a plurality of LFBs occupy an even larger CPU area, resulting in an excessively large overall CPU area.
Disclosure of Invention
In order to solve the above problems, embodiments of the present application provide a data processing method, a processor, and an electronic device.
In a first aspect, an embodiment of the present application provides a data processing method, for an electronic device, where the electronic device includes a processor and a memory, and the processor includes a line filling buffer, a line request buffer, and an eviction buffer; the method comprises the following steps: the processor detects a data read request; the processor judges the current working states of the line filling buffer and the eviction buffer; corresponding to the situation that the working states of the line filling buffer and the eviction buffer are both a first type state, the processor temporarily stores the data reading request, wherein the first type state is a non-idle state; and the processor sends the data reading request to the memory corresponding to the condition that the working state of the line filling buffer or the eviction buffer is a second type state, wherein the second type state is an idle state.
It can be appreciated that, in the embodiment of the present application, the states of the line filling buffer and the eviction buffer are checked before the data reading request is sent. When both the line filling buffer and the eviction buffer are in a non-idle state, it is determined that the line filling buffer cannot receive data; the data reading request is then temporarily held and not sent to the outside, so no returned data arrives that cannot be received. In this way, the deadlock that occurs when returned data cannot be received after a data reading request has been sent is effectively avoided.
According to the data reading method provided by the embodiment of the application, line request buffers with a smaller area can replace part of the line filling buffers in the processor, so the area of the processor is reduced while the number of data reading requests that the processor can send simultaneously is not reduced.
In one possible implementation of the application, the processor includes a cache memory, the method includes: the processor determines a first to-be-stored position of return data corresponding to the data reading request in the cache memory; the processor acquires the state of first data currently stored in the first position to be stored; when the state corresponding to the first data is the modified state, the processor sends an eviction application corresponding to the data reading request to the eviction buffer; and the eviction buffer receives the eviction application and evicts the first data from the processor.
In one possible implementation of the present application, the method further includes: the memory receives the data read request and sends second data corresponding to the data read request to the linefill buffer;
The linefill buffer sends the second data to the first to-be-stored location of the cache.
In one possible implementation of the present application, the eviction buffer performs an eviction operation corresponding to each eviction application based on a priority of each eviction application corresponding to each data read request.
In one possible implementation of the present application, in each eviction application corresponding to each data reading request, the priority of the eviction application corresponding to the data reading request sent by the processor is higher than the priority of the eviction application corresponding to the data reading request not sent by the processor.
In a second aspect, the present application provides a processor comprising a line filling buffer, a line request buffer, an eviction buffer and a control unit. The line filling buffer and the line request buffer are used for sending a data reading request; the control unit is used for detecting the data reading request; the control unit is used for judging the current working states of the line filling buffer and the eviction buffer; the control unit is configured to temporarily store the data reading request corresponding to the case that the working states of the line filling buffer and the eviction buffer are both a first type state, where the first type state is a non-idle state; and the control unit is configured to control the line filling buffer and the line request buffer to send the data reading request to the memory corresponding to the case that the working state of the line filling buffer or the eviction buffer is a second type state, where the second type state is an idle state.
It can be understood that, by reducing the number of line filling buffers and adding a corresponding number of smaller-area line request buffers in their place, the processor provided by the embodiment of the application can effectively reduce the processor area.
In a third aspect, the application provides an electronic device comprising a memory and a processor as mentioned in the embodiments of the application.
In a fourth aspect, an embodiment of the present application provides a data processing method, configured to be used in an electronic device, where the electronic device includes a processor and a memory, a data transmission channel between the processor and the memory is a single channel, and the processor includes a line filling buffer, a line request buffer, and an eviction buffer; the method comprises the following steps: the processor detects a data read request; the processor judges the current working state of the eviction buffer; corresponding to the condition that the working state of the eviction buffer is a first type state, the processor temporarily stores the data reading request, wherein the first type state is a non-idle state; and the processor sends the data reading request to the memory corresponding to the condition that the working state of the eviction buffer is a second type state, wherein the second type state is an idle state.
It will be appreciated that if the eviction buffer is in a non-idle state, i.e. the eviction buffer is evicting data, the data reading request is temporarily stored until the eviction buffer becomes idle. Therefore, when the data channel is a single channel, the deadlock caused by the evicted data and the returned data needing the channel at the same time, after a request has been sent to the outside and the external memory returns data, can be effectively avoided.
A fifth aspect of the present application provides an electronic device, including a memory and a processor, where a data transmission channel between the processor and the memory is a single channel, and the processor includes a line filling buffer, a line request buffer, an eviction buffer, and a control unit; the line filling buffer and the line request buffer are used for sending a data reading request; the control unit is used for detecting a data reading request; the control unit is used for judging the current working state of the eviction buffer; the control unit is configured to temporarily store the data read request corresponding to a case that the working state of the eviction buffer is a first type state, where the first type state is a non-idle state; and controlling the line filling buffer and the line request buffer to send the data reading request to the memory under the condition that the working state of the eviction buffer is a second type state, wherein the second type state is an idle state.
In a sixth aspect, an embodiment of the present application provides an electronic device, including: a memory for storing instructions to be executed by one or more processors of the electronic device; and a processor, which is one of the one or more processors of the electronic device, for performing the data processing method according to the embodiment of the present application.
In a seventh aspect, embodiments of the present application provide a readable storage medium having stored thereon instructions that, when executed on an electronic device, cause the electronic device to perform a data processing method as mentioned in the embodiments of the present application.
In an eighth aspect, embodiments of the present application provide a computer program product comprising execution instructions that, when executed on an electronic device, cause the electronic device to perform the data processing method mentioned in the embodiments of the present application.
Drawings
FIG. 1 illustrates a flow diagram of a processor reading data, according to some embodiments of the application;
FIG. 2 illustrates a flow diagram of a processor reading data, according to some embodiments of the application;
FIG. 3 illustrates a schematic diagram of a processor, according to some embodiments of the application;
FIG. 4 illustrates a flow diagram of data processing, according to some embodiments of the application;
FIG. 5 illustrates a schematic diagram of an electronic device, according to some embodiments of the application.
Detailed Description
Illustrative embodiments of the application include, but are not limited to, a data processing method, a processor, and an electronic device.
For a clearer understanding of the solution of the present application, a process of reading data by a processor in an embodiment of the present application will be briefly described.
As shown in fig. 1, a linefill buffer, a cache memory, an eviction buffer (Eviction Buffer, EVB), and an AXI bus interface may be included in the processor.
When the processor needs to read data, it can send a data reading request to the AXI bus interface through the line filling buffer, and the AXI bus interface sends the data reading request to the external memory. The external memory returns the corresponding request data, which is passed back to the line filling buffer through the AXI bus interface, and the line filling buffer then sends the request data to the corresponding position of the cache. The eviction buffer is used to evict, from the cache memory, the data already stored at the to-be-stored location of the requested data. Data in the cache memory is stored in cache line units, so storing data into the cache memory may be referred to as filling a cache line.
It will be appreciated that the requested data corresponding to each data read request has a set to-be-stored location in the cache memory. Therefore, before sending the data read request, the processor needs to determine, according to the state of the data currently stored at that location, whether that data needs to be evicted from the cache memory. If it needs to be evicted, an eviction application is issued to the eviction buffer; if it can simply be overwritten, no eviction application needs to be sent to the eviction buffer.
The processor may determine whether the currently stored data needs to be evicted as follows. If the currently stored data is modified data, it is data that has been processed by the processor and needs to be sent to the outside, so it must be evicted from the cache memory. If the currently stored data is unmodified original data, it only needs to be overwritten by the new data, so it does not need to be evicted and can be overwritten directly when the new data arrives at the cache memory. If no data is stored at the to-be-stored location, no eviction application needs to be sent to the eviction buffer.
For example, as shown in fig. 2, if the request data is CL1, the to-be-stored location set for CL1 in the cache memory is the location where data CL3 is stored, and CL3 is modified data, then CL3 is data that has been processed by the processor and needs to be sent to the outside, and it needs to be evicted from the cache memory. The processor may then issue an eviction application to the eviction buffer, so that the eviction buffer evicts data CL3 from the cache, i.e. moves CL3 from the cache to the eviction buffer and sends CL3 through the AXI bus interface to the corresponding location outside the processor.
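The decision just described can be summarized, purely for illustration, in the following sketch; the enum and function names are assumptions made for the example and are not taken from the patent.

#include <stdbool.h>

typedef enum { SLOT_EMPTY, SLOT_CLEAN, SLOT_MODIFIED } slot_state_t;

/* Only a modified line must be evicted before its slot is reused; a clean
 * line can simply be overwritten, and an empty slot needs no eviction. */
static bool needs_eviction_application(slot_state_t current)
{
    return current == SLOT_MODIFIED;
}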
As described above, a CPU generally provides a plurality of line filling buffers to store the request data corresponding to different line filling requests into the cache memory. However, because each line filling buffer occupies a relatively large CPU area, a plurality of line filling buffers occupy an even larger area, which results in a larger overall CPU area.
To solve the above problems, embodiments of the present application provide a processor that reduces the number of line fill buffers and adds a corresponding number of smaller-area line request buffers (Line Request Buffer, LRB) to replace them.
For example, the processor mentioned in the embodiment of the present application includes one line filling buffer and one line request buffer.
It can be understood that the area of the line request buffer is far smaller than that of the line filling buffer, while the line request buffer can send data reading requests just as the line filling buffer can. Therefore, in the embodiment of the application, line request buffers with a smaller area are used to replace part of the line filling buffers, so the area of the processor is reduced while the number of data reading requests that the processor can send simultaneously is not reduced.
It will be appreciated that each data read request sent by the processor to the external memory corresponds to a set of data returned from the external memory. However, the line request buffer can only send data read requests, like the line fill buffer, and does not have the line fill buffer's ability to receive returned data; that is, the line request buffer cannot receive returned data, and each line fill buffer can only receive and process one set of data at a time.
Therefore, when the number of data read requests that such a processor can send simultaneously is greater than the number of line fill buffers, data may be returned while all line fill buffers are occupied, so the returned data cannot be received by a line fill buffer and the processor deadlocks.
For example, in the embodiment of the present application, the original two line filling buffers are reduced to one, and one line request buffer is added to replace the removed line filling buffer; that is, the processor mentioned in the embodiment of the present application includes one line filling buffer and one line request buffer. In this scheme, the line filling buffer and the line request buffer may each send one request, i.e. two requests in total. When the request data corresponding to the first request is returned, the line filling buffer can receive it; but when the request data corresponding to the second request is returned, the data of the first request may still occupy the line filling buffer and the line request buffer cannot receive data, so the data corresponding to the second request cannot be received and the processor deadlocks.
In order to solve the above problems, an embodiment of the present application provides a data processing method applied to the above processor. After detecting a data read request (i.e. a linefill request), the processor determines whether the linefill buffer and the eviction buffer are currently both in a non-idle state; if so, it temporarily stores the data read request, i.e. it does not send the data read request to the external memory for the time being. The processor keeps detecting the states of the line filling buffer and the eviction buffer in real time, and sends the data reading request to the external memory once at least one of them is in an idle state.
It can be appreciated that, in the embodiment of the present application, the states of the line filling buffer and the eviction buffer are checked before the data reading request is sent. When both the line filling buffer and the eviction buffer are in a non-idle state, it is determined that the line filling buffer cannot receive data; the data reading request is then temporarily held and not sent to the outside, so no returned data arrives that cannot be received. In this way, the deadlock that occurs when returned data cannot be received after a data reading request has been sent is effectively avoided.
It will be appreciated that when either the eviction buffer or the linefill buffer is in an idle state, it is guaranteed that the linefill buffer will be able to receive the request data corresponding to the current data read request, so the data read request can be sent to the outside of the processor at this time.
For example, if the linefill buffer is in an idle state, the linefill buffer must be able to receive the request data corresponding to the current data read request.
If the linefill buffer is not in the idle state but the eviction buffer is, it can be inferred that the linefill buffer may be holding the data corresponding to a previous request that has just been received. Because the eviction buffer is idle at this time, if the existing data at the to-be-stored location of that previous request needs to be evicted from the cache memory, the eviction buffer can evict it directly. The linefill buffer can then send the data corresponding to the previous request to the cache memory for storage, which frees the linefill buffer, so it becomes idle and can receive the newly requested data.
If the existing data at the to-be-stored location of the previous request does not need to be evicted, the linefill buffer simply sends the data corresponding to the previous request to the cache memory to overwrite the existing data at that location, and again the linefill buffer becomes idle and can receive the requested data.
Therefore, if the line filling buffer is not idle but the eviction buffer is, the request data can be received when it is returned, regardless of whether the existing data at the to-be-stored locations of the previous request and of the current request needs to be evicted.
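The send-gating rule argued above can be stated compactly as follows; this is only an illustrative sketch, and the function name is an assumption.

#include <stdbool.h>

/* A new data read request may be issued only while the line filling buffer
 * or the eviction buffer is idle, since either case guarantees that the
 * returned data can eventually be accepted by the line filling buffer. */
static bool may_send_read_request(bool line_fill_buffer_idle,
                                  bool eviction_buffer_idle)
{
    return line_fill_buffer_idle || eviction_buffer_idle;
}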
It can be appreciated that, in the embodiment of the present application, before the data reading request is sent, the processor may determine the to-be-stored location of the request data corresponding to the data reading request and, according to the state of the data currently stored there, decide whether that data needs to be evicted from the cache memory. If it needs to be evicted, an eviction application is issued to the eviction buffer; if not, no eviction application needs to be issued.
It will be appreciated that in some embodiments, the eviction buffer executes the eviction instruction in accordance with the priority of the eviction application. The priority of the eviction application corresponding to the sent data reading request is greater than the priority of the eviction application corresponding to the unsent data reading request.
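For illustration, this priority rule can be sketched as follows; the structure and field names are assumptions and not part of the claimed design.

#include <stdbool.h>

typedef struct {
    unsigned id;             /* identifier of the eviction application        */
    bool     request_sent;   /* the associated data read request already left */
} eviction_application_t;

/* Returns nonzero when a should be served before b: an application whose
 * data read request has been sent outranks one whose request is still held. */
static int serve_a_before_b(const eviction_application_t *a,
                            const eviction_application_t *b)
{
    return a->request_sent && !b->request_sent;
}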
The following describes the structure of the processor according to the present application, and as shown in fig. 3, an embodiment of the present application provides a schematic structural diagram of the processor.
The processor includes a linefill buffer, a line request buffer, a cache memory, an eviction buffer, an AXI bus interface, a store buffer (STB), an eviction finite state machine unit (Eviction Finite-State Machine, EV-FSM), and a control unit.
The line filling buffer is used for sending a data reading request to the AXI bus interface, receiving request data returned by the AXI bus and sending the request data to a corresponding position of the cache memory.
For example, as shown in fig. 3, the linefill buffer may receive a store request from the store buffer, a load request from the load/store unit (LSU), and a request from PREF, each corresponding to a linefill request, and send the corresponding linefill request to the AXI bus interface. The linefill buffer may also receive the requested data, e.g. CL2, from the AXI bus interface and send CL2 to the corresponding location of the cache, e.g. the location where data CL1 is stored.
It will be appreciated that embodiments of the present application may include a linefill buffer.
The line request buffer is used for sending a data read request to the AXI bus interface. It will be appreciated that a line request buffer may be included in embodiments of the present application.
For example, as shown in fig. 3, the line request buffer may receive a store request from the store buffer, a load request from the load/store unit (LSU), and a request from PREF, each corresponding to a linefill request, and send the corresponding linefill request to the AXI bus interface.
The cache memory is used for storing the request data sent by the line filling buffer.
For example, as shown in fig. 3, the cache memory may be used to store the request data CL2 sent by the line filling buffer.
The eviction buffer is used for evicting data at the corresponding location in the cache memory out of the processor as required.
For example, as shown in fig. 3, the eviction buffer is configured to evict the existing data CL1 out of the processor when it is determined that CL1, which occupies the to-be-stored location of the request data CL2 corresponding to the linefill request, needs to be evicted.
The EV-FSM is used to detect the state of the eviction buffer, e.g., idle state and non-idle state, and to send the state of the eviction buffer to the control unit.
The AXI bus interface is used for sending the data reading request to the external AHB interface through an AAB channel between the AXI bus interface and the AHB interface, and for receiving the request data from the AHB and sending it to the linefill buffer. It is understood that the AAB channel may be a single channel or a dual channel.
The control unit may be control logic for performing the cache line filling method of the present application, and the control unit is configured to perform the data processing method mentioned in the embodiment of the present application.
The application provides an electronic device comprising a memory and a processor as mentioned in the embodiments of the application.
The data processing method according to the embodiments of the present application is described below with reference to the above-mentioned processor. FIG. 4 shows a schematic diagram of a data processing method according to an embodiment of the present application; the method shown in fig. 4 may be executed by the processor, and in some embodiments by the control unit of the processor. As shown in fig. 4, the data processing method includes:
401: a data read request is detected.
It will be appreciated that in embodiments of the present application, the data read request may also be referred to as a linefill request. The linefill request may correspond to a store request from the store buffer, a load request from the load/store unit, or a request from PREF.
402: The state of the current linefill buffer and the eviction buffer is obtained.
It will be appreciated that in embodiments of the present application, the processor may determine the state of the eviction buffer via the EV-FSM. And the processor may directly determine the state of the linefill buffer. Wherein the states of the linefill buffer and the eviction buffer may both include an idle state and a non-idle state.
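As a rough illustration of step 402 (the type and function names below are assumptions), the eviction buffer state is taken from the EV-FSM while the line filling buffer state is read directly, and both reduce to idle or non-idle for the check in step 403.

#include <stdbool.h>

typedef enum { BUF_IDLE, BUF_NON_IDLE } buf_state_t;

typedef struct {
    buf_state_t lfb_state;  /* read directly from the line filling buffer */
    buf_state_t evb_state;  /* reported by the EV-FSM                      */
} buffer_states_t;

/* True when both buffers are non-idle, i.e. the request must be held back. */
static bool both_non_idle(buffer_states_t s)
{
    return s.lfb_state == BUF_NON_IDLE && s.evb_state == BUF_NON_IDLE;
}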
403: Judge whether the states of the line filling buffer and the eviction buffer are both non-idle; if so, go to 404 and temporarily store the data reading request; if not, go to 405 and send the data reading request.
It can be appreciated that in the embodiment of the present application, the states of the line filling buffer and the eviction buffer are detected before the data reading request is sent, and when the line filling buffer and the eviction buffer are both in the non-idle state, the data reading request is temporarily stored, so that the deadlock in which returned data cannot be received after the data reading request has been sent can be effectively avoided.
It will be appreciated that when either the eviction buffer or the linefill buffer is in an idle state, it is guaranteed that the linefill buffer will be able to receive the request data corresponding to the current data read request, so the data read request can be sent to the outside of the processor at this time.
For example, if the linefill buffer is in an idle state, the linefill buffer must be able to receive the request data corresponding to the current data read request.
If the linefill buffer is not in the idle state but the eviction buffer is, it can be inferred that the linefill buffer may be holding the data corresponding to a previous request that has just been received. Because the eviction buffer is idle at this time, if the existing data at the to-be-stored location of that previous request needs to be evicted from the cache memory, the eviction buffer can evict it directly. The linefill buffer can then send the data corresponding to the previous request to the cache memory for storage, which frees the linefill buffer, so it becomes idle and can receive the newly requested data.
If the existing data at the to-be-stored location of the previous request does not need to be evicted, the linefill buffer simply sends the data corresponding to the previous request to the cache memory to overwrite the existing data at that location, and again the linefill buffer becomes idle and can receive the requested data.
404: Temporarily store the data read request.
In the embodiment of the application, when the processor determines that the states of the line filling buffer and the eviction buffer are both non-idle, the processor temporarily does not send the data reading request to the external memory. It keeps detecting the states of the line filling buffer and the eviction buffer in real time, and sends the data reading request to the external memory once at least one of them is in an idle state.
405: A data read request is sent.
It will be appreciated that, in the embodiment of the present application, the line fill buffer may send the data read request to the AXI bus interface, and the AXI bus interface then sends the data read request to the external memory.
It can be appreciated that, in the embodiment of the present application, the states of the line filling buffer and the eviction buffer are checked before the data reading request is sent. When both the line filling buffer and the eviction buffer are in a non-idle state, it is determined that the line filling buffer cannot receive data; the data reading request is then temporarily held and not sent to the outside, so no returned data arrives that cannot be received. In this way, the deadlock that occurs when returned data cannot be received after a data reading request has been sent is effectively avoided.
According to the data reading method provided by the embodiment of the application, line request buffers with a smaller area can replace part of the line filling buffers in the processor, so the area of the processor is reduced while the number of data reading requests that the processor can send simultaneously is not reduced.
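Putting steps 401 to 405 together, the control flow can be sketched in software form as follows; the function names and the software form itself are assumptions for illustration, since the control unit in the processor is hardware logic.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t addr; } read_request_t;

/* 404: hold the request back until one of the buffers becomes idle. */
static void park_request(const read_request_t *r)
{
    printf("404: hold request for 0x%llx\n", (unsigned long long)r->addr);
}

/* 405: issue the request to the AXI bus interface / external memory. */
static void send_to_axi(const read_request_t *r)
{
    printf("405: send request for 0x%llx\n", (unsigned long long)r->addr);
}

/* 401-403: on a detected read request, gate it on the two buffer states. */
static void handle_read_request(const read_request_t *req,
                                bool lfb_idle, bool evb_idle)
{
    if (!lfb_idle && !evb_idle)
        park_request(req);   /* both non-idle: the returned line has nowhere to go   */
    else
        send_to_axi(req);    /* at least one idle: the returned line can be accepted */
}

int main(void)
{
    read_request_t req = { 0x80001000ULL };
    handle_read_request(&req, false, false); /* both busy: request is held     */
    handle_read_request(&req, false, true);  /* eviction buffer idle: sent out */
    return 0;
}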
It will be appreciated that in the embodiment of the present application, before the data read request is sent, the processor may determine whether the currently stored data of the to-be-stored location needs to be evicted from the cache memory according to the state of the currently stored data. If the currently stored data needs to be evicted from the cache, an eviction application is issued to the eviction cache. If not, then there is no need to issue an eviction application to the eviction cache.
The processor may determine whether the currently stored data needs to be evicted as follows. If the currently stored data is modified data, it is data that has been processed by the processor and needs to be sent to the outside, so it must be evicted from the cache memory. If the currently stored data is unmodified original data, it only needs to be overwritten by the new data, so it does not need to be evicted and can be overwritten directly when the new data arrives at the cache memory. If no data is stored at the to-be-stored location, no eviction application needs to be sent to the eviction buffer.
For example, as shown in fig. 2, if the request data is CL1, the to-be-stored location set for CL1 in the cache memory is the first cache line 001, and the data CL3 stored in the first cache line 001 is modified data, then CL3 is data that has been processed by the processor and needs to be sent to the outside, and the currently stored data needs to be evicted from the cache memory. The processor may then issue an eviction application to the eviction buffer, so that the eviction buffer evicts data CL3 from the cache, i.e. moves CL3 from the cache to the eviction buffer and sends CL3 through the AXI bus interface to the corresponding location outside the processor.
It will be appreciated that in some embodiments, the eviction buffer executes the eviction instruction in accordance with the priority of the eviction application. The priority of the eviction application corresponding to the sent data reading request is greater than the priority of the eviction application corresponding to the unsent data reading request.
In some embodiments, when the channel AAB between the AXI bus interface and the AHB is a single channel, after the processor detects the data read request it may first determine whether the eviction buffer in the processor is in an idle state; if the eviction buffer is idle, the data read request is sent to the external memory.
If the eviction buffer is in a non-idle state, i.e. the eviction buffer is evicting data, the data read request is temporarily stored until the eviction buffer becomes idle, and only then is the data read request sent. Therefore, when the data channel is a single channel, the deadlock caused by the evicted data and the returned data needing the channel at the same time, after a request has been sent to the outside and the external memory returns data, can be effectively avoided.
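For the single-channel case, the corresponding gating rule reduces to the following sketch (again with an assumed function name): a read request is issued only while the eviction buffer is idle, so evicted data and returned data never need the shared channel at the same time.

#include <stdbool.h>

static bool may_send_on_single_channel(bool eviction_buffer_idle)
{
    return eviction_buffer_idle;
}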
It can be understood that the processor provided in the embodiment of the present application may be used in various electronic devices. The electronic device provided in the embodiment of the present application will be described below by taking the mobile phone 10 as an example.
The embodiment of the application provides electronic equipment, which comprises a memory and a processor, wherein a data transmission channel between the processor and the memory is a single channel, and the processor comprises a line filling buffer, a line request buffer, an eviction buffer and a control unit; the line filling buffer and the line request buffer are used for sending a data reading request; a control unit for detecting a data read request; the control unit is used for judging the working state of the current eviction buffer; the control unit is used for temporarily storing the data reading request corresponding to the condition that the working state of the eviction buffer is a first type state, wherein the first type state is a non-idle state; and controlling the line filling buffer and the line request buffer to send a data reading request to the memory under the condition that the working state of the expelling buffer is in a second type state, wherein the second type state is in an idle state.
An embodiment of the present application provides an electronic device, including: the memory is used for storing instructions executed by one or more processors of the electronic device, and the processor is one of the one or more processors of the electronic device and is used for executing the data processing method according to the embodiment of the application.
The embodiment of the application provides a readable storage medium, wherein instructions are stored on the readable storage medium, and when the instructions are executed on electronic equipment, the instructions cause the electronic equipment to execute the data processing method mentioned in the embodiment of the application.
An embodiment of the present application provides a computer program product, including execution instructions that, when executed on an electronic device, cause the electronic device to perform the data processing method mentioned in the embodiment of the present application.
As shown in fig. 5, the mobile phone 10 may include a processor 110, a power module 140, a memory 180, a mobile communication module 130, a wireless communication module 120, a sensor module 190, an audio module 150, a camera 170, an interface module 160, keys 101, a display 102, and the like.
It should be understood that the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the mobile phone 10. In other embodiments of the application, the handset 10 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may be the above-mentioned processor in the embodiment of the present application. The processor 110 may include one or more processing units, for example, processing modules or processing circuits such as a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a micro-programmed control unit (MCU), an artificial intelligence (AI) processor, or a programmable logic device such as a field programmable gate array (FPGA). The different processing units may be separate devices or may be integrated in one or more processors. A storage unit may be provided in the processor 110 for storing instructions and data. In some embodiments, the storage unit in the processor 110 is a cache 180.
The processor 110 may perform the data processing methods mentioned in the embodiments of the present application.
The power module 140 may include a power source, a power management component, and the like. The power source may be a battery. The power management component is used for managing the charging of the power source and the supplying of power from the power source to other modules. In some embodiments, the power management component includes a charge management module and a power management module. The charging management module is used for receiving charging input from the charger; the power management module is used for connecting the power source, the charging management module and the processor 110. The power management module receives input from the power source and/or the charge management module and provides power to the processor 110, the display 102, the camera 170, the wireless communication module 120, and the like.
The mobile communication module 130 may include, but is not limited to, an antenna, a power amplifier, a filter, a low noise amplifier (LNA), and the like. The mobile communication module 130 may provide a solution for wireless communication including 2G/3G/4G/5G, etc. applied to the handset 10. The mobile communication module 130 may receive electromagnetic waves from the antenna, perform processes such as filtering and amplifying on the received electromagnetic waves, and transmit the processed electromagnetic waves to a modem processor for demodulation. The mobile communication module 130 may amplify the signal modulated by the modem processor and convert it into electromagnetic waves to be radiated through the antenna. In some embodiments, at least some of the functional modules of the mobile communication module 130 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 130 may be disposed in the same device as at least some of the modules of the processor 110. Wireless communication technologies may include the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), Bluetooth (BT), the global navigation satellite system (GNSS), wireless local area networks (WLAN), near field communication (NFC), frequency modulation (FM), infrared (IR) technology, and the like. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS) and/or satellite based augmentation systems (SBAS).
The wireless communication module 120 may include an antenna, and transmit and receive electromagnetic waves via the antenna. The wireless communication module 120 may provide solutions for wireless communication including wireless local area networks (WLAN) (e.g. a wireless fidelity (Wi-Fi) network), Bluetooth (BT), the global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), etc., applied to the handset 10. The handset 10 may communicate with a network and other devices via wireless communication technology.
In some embodiments, the mobile communication module 130 and the wireless communication module 120 of the handset 10 may also be located in the same module.
The display screen 102 is used for displaying human-computer interaction interfaces, images, videos, and the like. The display screen 102 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
The sensor module 190 may include a proximity light sensor, a pressure sensor, a gyroscope sensor, a barometric sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
The audio module 150 is used to convert digital audio information into an analog audio signal output, or to convert an analog audio input into a digital audio signal. The audio module 150 may also be used to encode and decode audio signals. In some embodiments, the audio module 150 may be disposed in the processor 110, or some functional modules of the audio module 150 may be disposed in the processor 110. In some embodiments, the audio module 150 may include a speaker, an earpiece, a microphone, and an earphone interface.
The camera 170 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element converts the optical signal into an electrical signal and then transfers it to the image signal processor (ISP) to be converted into a digital image signal. The handset 10 may implement shooting functions through the ISP, the camera 170, a video codec, the graphics processing unit (GPU), the display 102, the application processor, and the like.
The interface module 160 includes an external memory interface, a universal serial bus (USB) interface, a subscriber identity module (SIM) card interface, and the like. The external memory interface may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the handset 10. The external memory card communicates with the processor 110 through the external memory interface to implement data storage functions. The universal serial bus interface is used for communication between the handset 10 and other electronic devices. The subscriber identity module card interface is used to communicate with a SIM card installed in the handset 10, for example by reading a telephone number stored in the SIM card or writing a telephone number into the SIM card.
In some embodiments, the handset 10 further includes keys 101, motors, indicators, and the like. The key 101 may include a volume key, an on/off key, and the like. The motor is used to generate a vibration effect on the mobile phone 10, for example, when the mobile phone 10 of the user is called, so as to prompt the user to answer the call from the mobile phone 10. The indicators may include laser indicators, radio frequency indicators, LED indicators, and the like.
Embodiments of the present disclosure may be implemented in hardware, software, firmware, or a combination of these implementations. Embodiments of the application may be implemented as a computer program or program code that is executed on a programmable system comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known manner. For the purposes of this application, a processing system includes any system having a processor such as, for example, a Digital Signal Processor (DSP), a microcontroller, an Application Specific Integrated Circuit (ASIC), or a microprocessor.
The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. Program code may also be implemented in assembly or machine language, if desired. Indeed, the mechanisms described in the present application are not limited in scope by any particular programming language. In either case, the language may be a compiled or interpreted language.
In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. For example, the instructions may be distributed over a network or through other computer-readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including but not limited to floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or tangible machine-readable memory used for transmitting information over the Internet in an electrical, optical, acoustical or other form of propagated signal (e.g., carrier waves, infrared signals, digital signals, etc.). Thus, a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
In the drawings, some structural or methodological features may be shown in a particular arrangement and/or order. However, it should be understood that such a particular arrangement and/or ordering may not be required. Rather, in some embodiments, these features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of structural or methodological features in a particular figure is not meant to imply that such features are required in all embodiments, and in some embodiments, may not be included or may be combined with other features.
It should be noted that, in the embodiments of the present application, each unit/module mentioned in each device is a logic unit/module, and in physical terms, one logic unit/module may be one physical unit/module, or may be a part of one physical unit/module, or may be implemented by a combination of multiple physical units/modules, where the physical implementation manner of the logic unit/module itself is not the most important, and the combination of functions implemented by the logic unit/module is only a key for solving the technical problem posed by the present application. Furthermore, in order to highlight the innovative part of the present application, the above-described device embodiments of the present application do not introduce units/modules that are less closely related to solving the technical problems posed by the present application, which does not indicate that the above-described device embodiments do not have other units/modules.
It should be noted that in the examples and descriptions of this patent, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
While the application has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the application.

Claims (12)

1. A data processing method, characterized in that it is used in an electronic device, the electronic device comprising a processor and a memory, the processor comprising a linefill buffer, a line request buffer, and an eviction buffer; the method comprises the following steps:
the processor detects a data read request;
the processor judges the current working states of the line filling buffer and the eviction buffer;
corresponding to the situation that the working states of the line filling buffer and the eviction buffer are both a first type state, the processor temporarily stores the data reading request, wherein the first type state is a non-idle state;
and the processor sends the data reading request to the memory corresponding to the condition that the working state of the line filling buffer or the eviction buffer is a second type state, wherein the second type state is an idle state.
2. The data processing method according to claim 1, wherein the processor comprises a cache memory, and the method further comprises:
the processor determines a first to-be-stored location, in the cache memory, of return data corresponding to the data read request;
the processor acquires a state of first data currently stored at the first to-be-stored location;
when the state of the first data is a modified state, the processor sends an eviction request corresponding to the data read request to the eviction buffer;
and the eviction buffer receives the eviction request and evicts the first data from the processor.
3. The data processing method according to claim 2, further comprising: the memory receives the data read request and transmits second data corresponding to the data read request to the line fill buffer;
and the line fill buffer sends the second data to the first to-be-stored location of the cache memory.
4. The data processing method according to any one of claims 1 to 3, wherein the eviction buffer performs an eviction operation corresponding to each eviction request based on a priority of each eviction request corresponding to each data read request.
5. The data processing method according to claim 4, wherein, among the eviction requests corresponding to the data read requests, the priority of an eviction request corresponding to a data read request that has been sent by the processor is higher than the priority of an eviction request corresponding to a data read request that has not been sent by the processor.
6. A processor, comprising a line fill buffer, a line request buffer, an eviction buffer, and a control unit; wherein:
the line fill buffer and the line request buffer are configured to send a data read request;
the control unit is configured to detect the data read request;
the control unit is configured to determine the current working states of the line fill buffer and the eviction buffer;
the control unit is configured to temporarily store the data read request in a case where the working states of the line fill buffer and the eviction buffer are both a first-type state, wherein the first-type state is a non-idle state;
and the control unit is configured to control the line fill buffer and the line request buffer to send the data read request to a memory in a case where the working state of the line fill buffer or the eviction buffer is a second-type state, wherein the second-type state is an idle state.
7. An electronic device comprising a memory and the processor of claim 6.
8. A data processing method, characterized in that the method is applied to an electronic device, wherein the electronic device comprises a processor and a memory, a data transmission channel between the processor and the memory is a single channel, and the processor comprises a line fill buffer, a line request buffer, and an eviction buffer; the method comprises the following steps:
the processor detects a data read request;
the processor determines the current working state of the eviction buffer;
in a case where the working state of the eviction buffer is a first-type state, the processor temporarily stores the data read request, wherein the first-type state is a non-idle state;
and in a case where the working state of the eviction buffer is a second-type state, the processor sends the data read request to the memory, wherein the second-type state is an idle state.
9. An electronic device, comprising a memory and a processor, wherein a data transmission channel between the processor and the memory is a single channel, and the processor comprises a line fill buffer, a line request buffer, an eviction buffer, and a control unit; wherein:
the line fill buffer and the line request buffer are configured to send a data read request;
the control unit is configured to detect the data read request;
the control unit is configured to determine the current working state of the eviction buffer;
the control unit is configured to temporarily store the data read request in a case where the working state of the eviction buffer is a first-type state, wherein the first-type state is a non-idle state;
and the control unit is configured to control the line fill buffer and the line request buffer to send the data read request to the memory in a case where the working state of the eviction buffer is a second-type state, wherein the second-type state is an idle state.
10. An electronic device, comprising: a memory for storing instructions to be executed by one or more processors of the electronic device; and a processor, which is one of the one or more processors of the electronic device, configured to perform the data processing method of any one of claims 1 to 5 or 8.
11. A readable storage medium having stored thereon instructions that, when executed on an electronic device, cause the electronic device to perform the data processing method of any of claims 1-5 or 8.
12. A computer program product comprising execution instructions which, when executed on an electronic device, cause the electronic device to perform the data processing method of any of claims 1-5 or 8.
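
The control flow recited in claims 1, 6, 8, and 9, together with the eviction priority rule of claims 4 and 5, can be illustrated with a short software model. The following C++ sketch is purely illustrative and is not part of the claimed subject matter: every identifier in it (BufferState, DataReadRequest, ControlUnit, and so on) is a hypothetical name chosen for this example, the hardware buffers are reduced to simple state flags, and the single-channel case of claims 8 and 9 is expressed as a configuration option.

    // Illustrative sketch only; not part of the patent text. All names below
    // (BufferState, DataReadRequest, ControlUnit, ...) are hypothetical.
    #include <algorithm>
    #include <deque>
    #include <vector>

    enum class BufferState { Idle, Busy };   // "second-type" = idle, "first-type" = non-idle

    struct DataReadRequest {
        unsigned address = 0;
        bool sentToMemory = false;           // used by the priority rule of claims 4 and 5
    };

    class ControlUnit {
    public:
        // singleChannel = true models claims 8 and 9 (single processor-memory channel).
        explicit ControlUnit(bool singleChannel = false) : singleChannel_(singleChannel) {}

        BufferState lineFillBuffer = BufferState::Idle;
        BufferState evictionBuffer = BufferState::Idle;

        // Claims 1 and 8: on detecting a data read request, either temporarily
        // store it or send it to memory, depending on the buffers' working states.
        void onDataReadRequest(DataReadRequest req) {
            const bool mustHold = singleChannel_
                ? (evictionBuffer == BufferState::Busy)            // claim 8: only the eviction buffer is checked
                : (lineFillBuffer == BufferState::Busy &&
                   evictionBuffer == BufferState::Busy);           // claim 1: both buffers are non-idle
            if (mustHold) {
                pending_.push_back(req);                           // temporarily store the request
            } else {
                sendToMemory(req);                                 // at least one buffer is idle: forward it
            }
        }

        // Claims 4 and 5: eviction requests tied to read requests that have already
        // been sent to memory are served before those whose read request is still held.
        static void orderEvictions(std::vector<DataReadRequest>& evictionQueue) {
            std::stable_sort(evictionQueue.begin(), evictionQueue.end(),
                             [](const DataReadRequest& a, const DataReadRequest& b) {
                                 return a.sentToMemory && !b.sentToMemory;
                             });
        }

    private:
        void sendToMemory(DataReadRequest& req) {
            req.sentToMemory = true;
            // ... issue the request over the processor-memory channel ...
        }

        bool singleChannel_ = false;
        std::deque<DataReadRequest> pending_;
    };

In the claimed hardware these decisions are taken by the control unit over the actual line fill, line request, and eviction buffers; the sketch only mirrors the decision points described in the claims.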
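
The write-back path of claims 2 and 3 can be pictured in the same spirit. Again, this is only an assumed sketch: the direct-mapped cache model, the 64-byte line size, the number of lines, and all identifiers below are illustrative choices, not structures taken from the patent.

    // Illustrative sketch of the flow in claims 2 and 3; all names, the direct-mapped
    // layout, the 64-byte line size, and the line count are assumptions for this example.
    #include <array>
    #include <cstddef>
    #include <cstdint>

    enum class LineState { Invalid, Clean, Modified };

    struct CacheLine {
        std::uint32_t tag = 0;
        LineState state = LineState::Invalid;
        std::array<std::uint8_t, 64> data{};            // one 64-byte cache line
    };

    constexpr std::size_t kNumLines = 256;              // assumed cache size

    struct Cache {
        std::array<CacheLine, kNumLines> lines{};

        // Claim 2: determine the first to-be-stored location of the return data and,
        // if the first data currently stored there is in the modified state, hand it
        // to the eviction buffer before it is overwritten.
        CacheLine& prepareFill(std::uint32_t address, void (*evict)(const CacheLine&)) {
            CacheLine& victim = lines[(address / 64) % kNumLines];
            if (victim.state == LineState::Modified) {
                evict(victim);                          // eviction request for this read request
            }
            return victim;
        }

        // Claim 3: the line fill buffer writes the second data returned by the memory
        // into the previously determined location.
        void fill(std::uint32_t address, const std::array<std::uint8_t, 64>& secondData,
                  void (*evict)(const CacheLine&)) {
            CacheLine& slot = prepareFill(address, evict);
            slot.tag = static_cast<std::uint32_t>(address / (64 * kNumLines));
            slot.data = secondData;
            slot.state = LineState::Clean;
        }
    };

A caller would construct a Cache and invoke fill() with the data returned through the line fill buffer, passing a callback that forwards a modified victim line to the eviction buffer.
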
CN202210897134.2A 2022-07-28 2022-07-28 Data processing method, processor and electronic equipment Active CN115129617B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210897134.2A CN115129617B (en) 2022-07-28 2022-07-28 Data processing method, processor and electronic equipment

Publications (2)

Publication Number Publication Date
CN115129617A (en) 2022-09-30
CN115129617B (en) 2024-10-22

Family

ID=83385221

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210897134.2A Active CN115129617B (en) 2022-07-28 2022-07-28 Data processing method, processor and electronic equipment

Country Status (1)

Country Link
CN (1) CN115129617B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114528230A (en) * 2022-04-21 2022-05-24 飞腾信息技术有限公司 Cache data processing method and device and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5659709A (en) * 1994-10-03 1997-08-19 Ast Research, Inc. Write-back and snoop write-back buffer to prevent deadlock and to enhance performance in an in-order protocol multiprocessing bus
US7082500B2 (en) * 2003-02-18 2006-07-25 Cray, Inc. Optimized high bandwidth cache coherence mechanism
US7882285B2 (en) * 2007-12-18 2011-02-01 International Business Machines Corporation Buffer cache management to prevent deadlocks
US8347035B2 (en) * 2008-12-18 2013-01-01 Intel Corporation Posting weakly ordered transactions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant