CN112799723A - Data reading method and device and electronic equipment - Google Patents
- Publication number
- CN112799723A CN112799723A CN202110399917.3A CN202110399917A CN112799723A CN 112799723 A CN112799723 A CN 112799723A CN 202110399917 A CN202110399917 A CN 202110399917A CN 112799723 A CN112799723 A CN 112799723A
- Authority
- CN
- China
- Prior art keywords
- access address
- host
- memory
- predicted
- address
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/30003—Arrangements for executing specific machine instructions
- G06F9/3004—Arrangements for executing specific machine instructions to perform operations on memory
- G06F9/30043—LOAD or STORE instructions; Clear instruction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0877—Cache access modes
Abstract
The invention discloses a data reading method and apparatus and an electronic device, relating to the field of electronic technologies. The method is applied to an electronic device having a memory and a host, and includes the following steps: when the system is in an idle state, determining a predicted access address corresponding to a last access address based on the last access address sent by the host; in response to a current access address sent by the host, controlling the memory to feed data back to the host according to the predicted access address when the current access address matches the predicted access address; and controlling the host to send the current access address to the memory when the current access address and the predicted access address do not match. The apparatus is used to execute the method of this technical scheme. Reading data with the provided method can improve the accuracy of the host's instruction fetching, thereby improving the overall performance and processing efficiency of the electronic device.
Description
Technical Field
The present invention relates to the field of electronic technologies, and in particular, to a data reading method and apparatus, and an electronic device.
Background
Micro Control Unit (MCU) chips typically use embedded NOR flash memory (eFlash) as instruction and data storage.
At present, the core operating frequency of a typical MCU chip is several times the eFlash operating frequency, so when the core fetches an instruction from the eFlash, it must wait for a period of time before data is available at the eFlash data output port.
The eFlash instruction-fetch speed determines the overall performance and efficiency of an embedded system-on-a-chip (SoC). Because the MCU chip must wait for a period of time before fetching data, the speed at which the MCU chip reads data is low, and consequently the overall performance and efficiency of the system chip are low.
Disclosure of Invention
The invention aims to provide a data reading method, a data reading device and electronic equipment, and aims to solve the problem that the overall performance and efficiency of a system chip are low due to the fact that the speed of reading data by an MCU chip is low.
In a first aspect, the present invention provides a data reading method applied to an electronic device having a memory and a host, the method including:
under the condition that the system is in an idle state, determining a predicted access address corresponding to a last access address based on the last access address sent by a host;
responding to a current access address sent by the host, and controlling the memory to feed back data to the host according to the predicted access address under the condition that the current access address is matched with the predicted access address;
and controlling the host to send the current access address to the memory if the current access address and the predicted access address do not match.
With this technical scheme, when the system suspends access to the embedded flash memory device, that is, when the system is in an idle state, the system can immediately and automatically generate a predicted access address based on the previous access address and can send it at any time in place of the host, which improves the host's instruction-fetch efficiency. When the current access address matches the predicted access address, the memory is controlled to feed data back to the host according to the predicted access address; when the two addresses do not match, the host is controlled to send the current access address to the memory. This improves the accuracy of host instruction fetching and thereby the overall performance and processing efficiency of the electronic device.
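As a rough illustration of the hit/miss decision described above, the match-then-serve logic can be sketched in Python (the names `Memory` and `serve_access` are illustrative, not from the patent):

```python
class Memory:
    """Toy stand-in for the eFlash: a mapping from access address to data."""

    def __init__(self, contents):
        self._contents = dict(contents)

    def read(self, addr):
        return self._contents[addr]


def serve_access(current_addr, predicted_addr, memory):
    """Return (data, hit) for one host access given a pre-generated prediction."""
    if predicted_addr is not None and current_addr == predicted_addr:
        # Match: feed back the pre-read data for the predicted address.
        return memory.read(predicted_addr), True
    # Mismatch: the host re-sends the current address to the memory.
    return memory.read(current_addr), False
```

On a hit the pre-read result is returned immediately; on a miss the access simply falls back to a normal read of the current address.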
In a possible implementation manner, the determining, based on a previous access address sent by a host, a predicted access address corresponding to the previous access address includes:
determining the next access address as the predicted access address under the condition that the previous access address and the corresponding next access address are the jump addresses of the previous access address;
and if the previous access address and the corresponding next access address are continuous addresses of the previous access address, generating continuous addresses based on the previous access address, and determining the continuous addresses as the predicted access addresses.
In a possible implementation manner, the determining, in a case where the previous access address and the corresponding next access address are jump addresses of the previous access address, the next access address as the predicted access address includes:
under the condition that the previous access address and the corresponding next access address are the jump address of the previous access address, determining the next access address corresponding to the previous access address based on the previous access address sent by the host and a preset corresponding relation;
determining the next access address as the predicted access address;
the preset corresponding relation comprises a corresponding relation between the previous access address and the corresponding predicted access address; and the preset corresponding relation is the corresponding relation between the last access address and the predicted access address fed back by the memory.
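Under the assumptions of a word-aligned instruction stream and a branch lookup table that maps a previous access address to its recorded jump target (the "preset corresponding relation"), the two prediction cases above can be sketched as:

```python
WORD = 4  # assumed instruction width in bytes; illustrative only


def predict_next(last_addr, branch_table):
    """Generate the predicted access address from the last access address.

    branch_table plays the role of the preset corresponding relation: it maps
    a previous access address to the jump target that was observed after it.
    """
    if last_addr in branch_table:
        return branch_table[last_addr]  # jump recorded: predict the jump target
    return last_addr + WORD             # no branch: predict the next sequential address
```

This is only a sketch of the rule; the patent leaves the exact table format and word width to the implementation.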
In a possible implementation manner, after the controlling the host sends the current access address to the memory, the method further includes:
and controlling the memory to feed back data to the host according to the current access address.
In a possible implementation manner, after controlling the memory to feed back data to the host according to the current access address or after controlling the memory to feed back data to the host according to the predicted access address, the method further includes:
updating the preset corresponding relation according to the current access address and the corresponding predicted access address;
the preset corresponding relation comprises a corresponding relation between the current access address and the corresponding predicted access address; and the preset corresponding relation is the corresponding relation between the current access address and the predicted access address fed back by the memory.
In one possible implementation, the controlling the host to send the current access address to the memory in the case that the current access address and the predicted access address do not match includes:
controlling the memory to empty cache data when the current access address and the predicted access address do not match;
controlling the host to send the current access address to the memory.
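A minimal sketch of this mismatch path, assuming the cache of pre-read data is a simple dict (the names are illustrative, not from the patent):

```python
def serve_prediction_miss(current_addr, prefetch_cache, memory):
    """On a mismatch: empty the cache of pre-read data, then re-issue the access."""
    prefetch_cache.clear()           # discard the speculatively pre-read data
    return memory[current_addr]      # host re-sends the current address to the memory
```

Clearing first ensures stale speculative data can never be returned for a later access.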
In a second aspect, the present invention further provides a data reading apparatus applied to an electronic device having a memory and a host, the apparatus including a processor and a communication interface coupled to the processor; the processor is configured to execute a computer program or instructions to implement the data reading method according to the first aspect or any possible implementation manner of the first aspect.
The beneficial effects of the data reading apparatus provided by the second aspect are the same as the beneficial effects of the data reading method described in the first aspect or any possible implementation manner of the first aspect, and are not described herein again.
In a third aspect, the present invention further provides an electronic device, which includes a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the data reading method according to the first aspect or any possible implementation manner of the first aspect.
The beneficial effects of the electronic device provided by the third aspect are the same as the beneficial effects of the data reading method described in the first aspect or any possible implementation manner of the first aspect, and are not described herein again.
In a fourth aspect, the present invention further provides a computer storage medium, where instructions are stored, and when the instructions are executed, the data reading method described in the first aspect or any possible implementation manner of the first aspect is implemented.
The beneficial effects of the computer storage medium provided by the fourth aspect are the same as the beneficial effects of the data reading method described in the first aspect or any possible implementation manner of the first aspect, and are not described herein again.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
fig. 1 is a schematic flow chart illustrating a data reading method according to an embodiment of the present invention;
FIG. 2 is a flow chart of another data reading method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a data read waveform according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another data reading waveform provided by the embodiment of the invention;
FIG. 5 is a diagram illustrating a branch lookup table according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a scenario for determining a predicted access address according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a data read waveform provided by an embodiment of the present invention;
fig. 8 is a schematic diagram illustrating a cache structure and a principle according to an embodiment of the present invention;
fig. 9 is a schematic view illustrating a scenario of a storage structure and an update mode of a branch lookup table according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a data reading circuit according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a data reading apparatus according to an embodiment of the present invention;
fig. 12 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of a chip according to an embodiment of the present invention.
Detailed Description
In order to facilitate a clear description of the technical solutions of the embodiments of the present invention, terms such as "first" and "second" are used in the embodiments to distinguish identical or similar items having substantially the same functions and effects. For example, a first threshold and a second threshold are distinguished only as different thresholds, with no limitation on their order. Those skilled in the art will appreciate that the terms "first", "second", and the like do not limit quantity or execution order, nor do they denote relative importance.
It is to be understood that the terms "exemplary" or "such as" are used herein to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the present invention, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a and b combination, a and c combination, b and c combination, or a, b and c combination, wherein a, b and c can be single or multiple.
An embedded flash memory (eFlash) is generally used as instruction and data storage for a Micro Control Unit (MCU) chip. A chip using embedded flash has the advantage that a program stored in the flash can be run directly; however, when the Central Processing Unit (CPU) accesses the eFlash, data becomes available only some time after an access request is received. The performance of the eFlash itself can be improved only by a few means such as process improvement, so as processes advance, the eFlash instruction-fetch speed gradually becomes a main factor limiting the electronic device in which the chip is located.
At present, the core operating frequency of a typical MCU chip is several times the eFlash operating frequency, so when the core fetches an instruction from the eFlash, it must wait for a period of time before data is available at the eFlash data output port. Specifically, when the MCU chip fetches instructions from the eFlash, each bus request must wait for a period of time before data can be taken from the eFlash data output port; a typical eFlash read access time (read latency, TRC) is about 20 to 40 nanoseconds (ns). Since the MCU control circuit can only operate on clock edges, the concept of a number of wait cycles is introduced. The number of wait cycles of the bus clock (Latency), the bus clock frequency (f), and the read latency (TRC) need to satisfy the following relationship:

Latency ≥ f × TRC
that is, the number of the waiting cycles is greater than or equal to the product of the clock frequency and the read latency, wherein the number of the waiting cycles is a natural number.
The eFlash instruction-fetch speed determines the overall performance and efficiency of an embedded system-on-a-chip (SoC). Because data can be fetched only after a waiting period, the speed at which the eFlash is read is low, and consequently the overall performance and efficiency of the SoC are low.
The embodiment of the invention provides a data reading method which can be applied to electronic equipment with a memory and a host. Fig. 1 shows a schematic flowchart of a data reading method according to an embodiment of the present invention, and as shown in fig. 1, the data reading method includes:
step 101: and under the condition that the system is in an idle state, determining a predicted access address corresponding to the last access address based on the last access address sent by the host.
In this application, the host may be a CPU, a Direct Memory Access (DMA), a Serial Wire Debug (SWD) controller, and the like, which is not specifically limited in this embodiment of the present invention. The memory may be an Embedded NOR flash (eFlash) memory.
The embedded flash memory device is in an idle state after a read finishes and remains in a working state during a continuous read of data. When the system suspends access to the embedded flash memory device, that is, when the system is in an idle state, a predicted access address can be generated immediately and automatically based on the previous access address and sent at any time in place of the host, which improves the host's instruction-fetch efficiency.
When the system is in an idle state, that is, the system does not currently receive the access signal sent by the host, at this time, the electronic device may obtain a previous access address sent by the host, and determine, based on the previous access address, a predicted access address corresponding to the previous access address.
In the case where the system is in the idle state, after determining the predicted access address corresponding to the last access address based on the last access address sent by the host, step 102 or step 103 is executed.
Step 102: and responding to the current access address sent by the host, and controlling the memory to feed back data to the host according to the predicted access address under the condition that the current access address is matched with the predicted access address.
The current access address is matched with the predicted access address, namely the predicted access address is predicted correctly, and the predicted access address is the same as the current access address.
Step 103: and in the case that the current access address and the predicted access address do not match, the control host sends the current access address to the memory.
In this application, the current access address and the predicted access address do not match, that is, the predicted access address is incorrect, and at this time, the electronic device may control the host to send the current access address to the memory again.
In the embodiment of the invention, when the system suspends the access to the embedded flash memory device, namely when the system is in an idle state, the predicted access address can be immediately and automatically generated based on the previous access address, and the predicted access address can be sent by the host instead of the host at any time, so that the instruction fetching efficiency of the host can be improved. And under the condition that the current access address is matched with the predicted access address, the memory is controlled to feed back data to the host according to the predicted access address, and under the condition that the current access address is not matched with the predicted access address, the host is controlled to send the current access address to the memory, so that the accuracy of host instruction fetching can be improved, and the overall performance and the processing efficiency of the electronic equipment are improved.
Fig. 2 is a schematic flowchart illustrating another data reading method according to an embodiment of the present invention, where the data reading method is applied to an electronic device having a memory and a host, and as shown in fig. 2, the data reading method includes:
step 201: and under the condition that the system is in an idle state, determining the next access address as a predicted access address under the condition that the previous access address and the corresponding next access address are the jump addresses of the previous access address.
And the predicted access address is a next access address corresponding to the historical previous access address.
A branch existing between the previous access address and the corresponding next access address means that the two addresses are not consecutive and a jump address exists.
In this application, the host may be a CPU, a DMA, or an SWD controller, and the like, which is not specifically limited in this embodiment of the present invention. The memory may be an Embedded NOR flash (eFlash) memory.
When the embedded flash memory device is in an idle state after reading is finished, the embedded flash memory device is kept in a working state in the continuous reading process of data, when a system suspends accessing the embedded flash memory device, namely when the system is in the idle state, a predicted access address can be immediately and automatically generated based on the previous access address, and the predicted access address can be sent at any time instead of a host, so that the instruction fetching efficiency of the host can be improved.
It should be noted that, when the system suspends access to the embedded flash memory device, three cases can arise. If the electronic device receives a new access request from the host while it is generating the predicted access address based on the previous access address, and the new request matches the predicted access address, the predicted access address can be returned to the host; in this scenario the number of cycles is effectively reduced. If generation of the predicted access address has completed but the host has not yet initiated a new access request, the electronic device obtains the predicted access address and stores it, together with the corresponding data, into a cache. If the electronic device detects that the cache space is insufficient and no access request from the host has been received, it shuts down the process that generates predicted access addresses based on the previous access address and stops the operation of the memory.
Optionally, the electronic device may receive a control signal, a current address signal, and a data signal of the AHB bus, decode and convert the current address signal into a current access signal corresponding to the memory, and decode and convert the control signal into a command signal corresponding to the memory.
In the present application, a specific implementation manner of the step 201 may include the following sub-steps:
substep A1: and under the condition that the previous access address and the corresponding next access address are the jump addresses of the previous access address, determining the next access address corresponding to the previous access address based on the previous access address sent by the host and a preset corresponding relation.
The preset corresponding relation comprises a corresponding relation between a previous access address and a corresponding predicted access address; the preset corresponding relation is the corresponding relation between the last access address and the predicted access address fed back by the memory. The preset correspondence may be stored in the electronic device as a branch lookup table in a table form.
Substep A2: the next access address is determined to be the predicted access address.
Therefore, in the present application, when a jump execution exists between a previous access address and a corresponding next access address, that is, when a branch exists, the electronic device may determine the next access address corresponding to the previous access address based on the previous access address and a preset corresponding relationship, which may improve a hit rate of a generated predicted access address and improve execution efficiency of the program.
Optionally, the preset corresponding relationship may be dynamically updated according to Least Recently Used (LRU), First In Last Out (FILO), First In First Out (FIFO), and other algorithms.
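Taking LRU as one of the replacement policies mentioned above, a fixed-depth branch lookup table could be sketched with an `OrderedDict` (an illustrative sketch, not the patent's hardware implementation):

```python
from collections import OrderedDict


class BranchLookupTable:
    """Fixed-depth map from a previous access address to its recorded jump target."""

    def __init__(self, depth=8):
        self.depth = depth
        self._table = OrderedDict()

    def lookup(self, last_addr):
        target = self._table.get(last_addr)
        if target is not None:
            self._table.move_to_end(last_addr)  # mark the entry as recently used
        return target

    def update(self, last_addr, jump_target):
        if last_addr in self._table:
            self._table.move_to_end(last_addr)
        elif len(self._table) >= self.depth:
            self._table.popitem(last=False)     # evict the least recently used entry
        self._table[last_addr] = jump_target
```

A FIFO or FILO policy would differ only in which entry `update` chooses to evict.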
In the application, a host interface and an eFlash interface corresponding to a host can be determined according to the host condition and the eFlash device condition, the host interface is mainly used for performing signal conversion, decoding and the like according to a protocol supported by the host, and the eFlash interface is mainly used for communicating with eFlash according to a time sequence required by the eFlash device.
For example, the host may be a Cortex-M0 CPU, and the corresponding host interface may use two sets of AHB buses responsible for reading the control signal, the current address signal, and the data signal, respectively. The AHB address and data bit width is 32 bits and the bus clock frequency is 64 MHz, giving a period of 15.625 nanoseconds. The maximum access time of the eFlash is 30 nanoseconds; considering factors such as logic delay, transmission delay, and signal setup and hold time, the access cycle of the eFlash interface can be set to 3 clock cycles. When the size of the eFlash device is 64 kilobytes (KiB) with a 32-bit width, the valid portion of the address is 14 bits, so the host interface must be controlled to truncate the 32-bit address to a 14-bit address.
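The 32-bit-to-14-bit truncation described above amounts to masking off the high address bits (a sketch under the 64 KiB, 32-bit-width assumption; the function name is illustrative):

```python
def truncate_address(addr32, valid_bits=14):
    """Keep only the low address bits that index a 64 KiB, 32-bit-wide eFlash."""
    return addr32 & ((1 << valid_bits) - 1)
```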
For example, assume the host continuously sends the current access addresses 0x0400_0004 and 0x0400_001c and repeatedly sends them at intervals, four transmissions in total. In a scenario where no predicted access address is determined from the last access address (see fig. 3), a total of 16 clock cycles is required to complete the four transmissions.
For example, suppose address prediction uses a preset correspondence in the form of a branch lookup table of depth 8, updated with the LRU algorithm, where the correspondence is a one-to-one mapping from original addresses to jump addresses, that is, from a previous access address to a predicted access address. Suppose the host continuously sends the current access addresses 0x0400_0004 and 0x0400_001c and repeats them at intervals, four transmissions in total. The first transmission (0x4) is a normal access; see fig. 4. The predicted access address 0x08 is generated for pre-reading before the next host access becomes valid, but the actual access is 0x1c; since 0x1c and 0x8 compare unequal, the predicted access address is considered mismatched and the preset correspondence is updated. The predicted access address 0x20 is then generated for pre-reading before the next host access, but 0x4 is accessed again; the prediction again mismatches and the preset correspondence is updated again. The predicted access address 0x1c is then generated for pre-reading before the next host access; the actual access at this time is 0x1c, the prediction matches, and the system simply waits for the eFlash to finish reading. A total of 15 clock cycles is eventually required. If the predicted access address and the current access address had matched from the start, only 12 cycles in total would be required. Referring to fig. 5, a schematic diagram of a branch lookup table provided by an embodiment of the present invention: when the current access address is {1c, 04} and the corresponding predicted access address in the branch lookup table is {0, d4}, the relevant data at {1c, 04} is output directly.
For example, fig. 6 is a schematic diagram illustrating a scenario for determining a predicted access address according to an embodiment of the present invention. As shown in fig. 6, the scenario includes step C1: when the previous access address and the corresponding next access address are the jump address of the previous access address, that is, the previous access address matches a stored address in the preset correspondence, the jump address Y may be output; and step C2: when the previous access address and the corresponding next access address are continuous addresses of the previous access address, a continuous address is generated based on the previous access address and determined as the predicted access address. The jump address refers to the next access address, that is, the predicted access address, recorded in the preset correspondence for the previous access address, where the previous access address stored in the preset correspondence is itself a historical access address.
In the case where the previous access address and the corresponding next access address are the jump addresses of the previous access address, step 203 is executed after the next access address is determined to be the predicted access address.
Step 202: and in the case that the previous access address and the corresponding next access address are continuous addresses of the previous access address, generating continuous addresses based on the previous access address, and determining the continuous addresses as predicted access addresses.
In this application, when there is no jump address between the previous access address and the corresponding next access address, that is, there is no branch, the electronic device generates a consecutive address corresponding to the previous access address based on the previous access address, and determines the consecutive address as the predicted access address.
If the previous access address and the corresponding next access address are consecutive addresses of the previous access address, the consecutive addresses are generated based on the previous access address, and after the consecutive addresses are determined as the predicted access addresses, step 203 is performed.
Step 203: and responding to the current access address sent by the host, and controlling the memory to feed back data to the host according to the predicted access address under the condition that the current access address is matched with the predicted access address.
A match between the current access address and the predicted access address means that the prediction was correct, that is, the predicted access address is identical to the current access address.
The predicted access address and the corresponding feedback data are stored in the cache. For example, the cache may use 32 bits of data plus a 16-bit address as one cache region, four cache regions may be provided, and each cache region may be cleared immediately after being used. Assume that the host alternately sends the current access addresses 0x0400_0004 and 0x0400_001c, repeating them at intervals four times, that the predicted access address has a branch whose relationship is stored in the preset correspondence, and that the current access address matches the predicted access address. Referring to fig. 7, the data reading may then be completed in only 4 cycles, and the host may immediately obtain the data from the cache associated with the preset correspondence.
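The four-region cache described above can be modelled as follows. This is a sketch only: it assumes the 16-bit address acts as the tag for the 32-bit data and that a region is freed as soon as its entry is consumed; the class and method names are invented for illustration.

```python
class PredictionCache:
    """Four cache regions, each holding one (16-bit address, 32-bit data)
    pair, mirroring the layout described in the text. A region is
    cleared immediately after its entry is used."""

    REGIONS = 4

    def __init__(self):
        self.regions = [None] * self.REGIONS  # each slot: (addr, data) or None

    def fill(self, addr: int, data: int) -> bool:
        """Store predicted data in the first free region; False if full."""
        for i, slot in enumerate(self.regions):
            if slot is None:
                self.regions[i] = (addr & 0xFFFF, data & 0xFFFFFFFF)
                return True
        return False

    def lookup(self, addr: int):
        """Return cached data for addr and clear the region, or None."""
        for i, slot in enumerate(self.regions):
            if slot is not None and slot[0] == (addr & 0xFFFF):
                self.regions[i] = None  # cleared immediately after use
                return slot[1]
        return None  # miss: fall back to reading the memory itself
```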
In summary, the data reading method provided in the embodiments of the present invention can reduce the number of cycles required to read each piece of data from the memory to as few as 1, with the worst case no greater than the theoretical shortest time for reading data, so that at an operating frequency of 64 MHz the data reading speed can be increased by about 18%.
After controlling the memory to feed back data to the host based on the predicted access address, step 206 is performed.
Step 204: and in the case that the current access address and the predicted access address do not match, controlling the host to send the current access address to the memory.
In this application, if the current access address and the predicted access address do not match, the prediction was incorrect. In this case the electronic device may control the host to send the current access address to the memory again. The specific implementation of step 204 may include the following sub-steps:
substep B1: and controlling the memory to empty the cache data under the condition that the current access address and the predicted access address do not match.
An individual cache region within the cache data may also be emptied based on conditions such as the number of times it has been used or updates to the memory content, where updating the memory content refers to clearing historical content in first-in first-out order.
In this application, the cache data may be emptied through a first-in first-out (FIFO) cache structure or through a high-speed cache structure, and each individual cache region can be controlled independently by the cache structure.
For example, fig. 8 shows a schematic diagram of a cache structure and its principle according to an embodiment of the present invention. As shown in fig. 8, after the current access address is received, it is determined whether an address in the preset correspondence matches it; if an equal address exists, the cache data corresponding to that address is output, and if not, a clock cycle is waited to obtain the data from the eFlash. The preset correspondence stores the relationship between addresses and cache data, each address corresponding to its own cache data.
Substep B2: controlling the host to send the current access address to the memory.
If the current access address and the predicted access address do not match, the last prediction failed, and the requested data is not among the predicted data corresponding to the predicted access address. The electronic device may then control the host to send the current access address to the memory again, control the memory to generate the related control signals strictly according to the interface timing requirements of the memory, and wait for the data response of the memory.
In the case that the current access address and the predicted access address do not match, step 205 is executed after the host is controlled to send the current access address to the memory.
Step 205: and controlling the memory to feed back data to the host according to the current access address.
The electronic device may control the host to send the current access address again; the memory, in response to the current access address, determines the data corresponding to it and sends the data to the host.
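Sub-steps B1 and B2 together with step 205 can be sketched as a single dispatch routine. The model below is illustrative only: the cache and the memory are plain dicts standing in for the prediction cache and the eFlash, and the extra wait cycles of a real eFlash read are not modelled.

```python
def handle_access(current_addr, predicted_addr, cache, memory):
    """Serve one host access.

    On a correct prediction the data comes straight from the cache;
    on a mismatch the cache data is emptied (sub-step B1) and the
    current access address is sent to the memory again (sub-step B2),
    which then feeds the data back to the host (step 205).
    """
    if current_addr == predicted_addr and current_addr in cache:
        return cache[current_addr], "hit"
    cache.clear()                        # B1: empty the cache data
    return memory[current_addr], "miss"  # B2 + step 205: read from memory
```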
After controlling the memory to feed back data to the host according to the current access address, step 206 is performed.
Step 206: and updating the preset corresponding relation according to the current access address and the corresponding predicted access address.
The preset corresponding relation comprises the corresponding relation between the current access address and the corresponding predicted access address; the preset corresponding relation is the corresponding relation between the current access address and the predicted access address fed back by the memory.
In this application, the electronic device may include a register set, an algorithm module, and a pointer register. When the current access address is received, the algorithm module may generate an update enable signal and a pointer signal based on the current access address and the state of the register set corresponding to it, and the pointer register may control the updating of the preset correspondence based on the update enable signal and the pointer signal, for example by controlling the update of a branch lookup table that holds the preset correspondence.
For example, fig. 9 is a schematic view illustrating the storage structure and update manner of a branch lookup table according to an embodiment of the present invention. As shown in fig. 9, when the previous access address matches the current access address, the electronic device may generate an update enable signal and a pointer signal based on the current access address and the state of the register set corresponding to it, and the pointer register may control the updating of the branch lookup table based on these signals. The branch lookup table may include a plurality of groups, each group being one preset correspondence consisting of other data information, a jump address, and an original address. A jump address is a next access address that can be determined as the predicted access address when the next access address corresponding to the previous access address is determined to be a jump address of the previous access address.
A register group may be formed by concatenating information such as the current access address, the data read at the current access address, the predicted access address, the number of times the cache has been used, and whether the cache is occupied; the main body of the preset correspondence is formed by a plurality of such register groups.
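The register groups and branch lookup table just described might be modelled as below. The field names, the default table depth of four groups, and the first-in first-out eviction policy are assumptions made for this sketch, not details fixed by the text.

```python
from dataclasses import dataclass

@dataclass
class BranchEntry:
    """One register group of the branch lookup table: the original
    (previous) access address, the jump address recorded as its
    successor, and bookkeeping fields. Field names are illustrative."""
    orig_addr: int
    jump_addr: int
    use_count: int = 0
    occupied: bool = True

class BranchTable:
    def __init__(self, size=4):
        self.entries = []
        self.size = size

    def update(self, orig_addr, jump_addr):
        """Record or refresh the jump observed after orig_addr."""
        for e in self.entries:
            if e.orig_addr == orig_addr:
                e.jump_addr = jump_addr
                return
        if len(self.entries) >= self.size:
            self.entries.pop(0)  # evict the oldest group, FIFO
        self.entries.append(BranchEntry(orig_addr, jump_addr))

    def predict(self, prev_addr):
        """Return the recorded jump address for prev_addr, or None."""
        for e in self.entries:
            if e.orig_addr == prev_addr:
                e.use_count += 1
                return e.jump_addr  # jump address becomes the prediction
        return None  # no branch recorded: fall back to sequential
```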
For example, fig. 10 shows a schematic diagram of a data reading circuit according to an embodiment of the present invention. As shown in fig. 10, the electronic device may include a host interface 01 and, connected in sequence to the host interface 01, a preset correspondence unit 02, a cache unit 03, an eFlash interface 04, a decoding and address-distributing unit 05, a matching unit 06, a comparison logic unit 07, a determining unit 08, and a request determining unit 09. The host interface 01 is used for receiving data and the like; the preset correspondence unit 02 is used for determining the predicted access address corresponding to the previous access address based on the previous access address sent by the host; the cache unit 03 is used for storing the preset correspondence; the eFlash interface 04 is used for communicating with the eFlash according to the timing required by the eFlash device; the decoding and address-distributing unit 05 is used for signal conversion, decoding, and the like according to the protocol supported by the host; the matching unit 06 is used for judging whether the current access address and the predicted access address match; the comparison logic unit 07 is used for controlling the memory to feed back data to the host according to the predicted access address when the current access address matches the predicted access address, and for controlling the host to send the current access address to the memory when they do not match; the determining unit 08 is configured to determine the next access address corresponding to the previous access address, based on the previous access address sent by the host and the preset correspondence, when that next access address is a jump address of the previous access address, and to generate a continuous address based on the previous access address and determine it as the predicted access address when the next access address is a continuous address of the previous access address; the request determining unit 09 is configured so that, if the process of generating the predicted access address from the previous access address has completed and the host has not yet initiated a new access request, the electronic device obtains the predicted access address and stores it together with the corresponding related data into the cache.
In the embodiment of the invention, when the system suspends access to the embedded flash memory device, that is, when the system is in an idle state, the predicted access address can be generated immediately and automatically based on the previous access address and can be sent on the host's behalf at any time, which improves the instruction-fetching efficiency of the host. When the current access address matches the predicted access address, the memory is controlled to feed back data to the host according to the predicted access address; when they do not match, the host is controlled to send the current access address to the memory. This improves the accuracy of host instruction fetching and thereby the overall performance and processing efficiency of the electronic device.
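Putting the two prediction paths together, the idle-time predictor of steps 201–202 reduces to a few lines. In this sketch the branch lookup table is modelled as a dict mapping a previous access address to its recorded jump address, and the 4-byte sequential stride is again an assumption.

```python
def predict_next(prev_addr: int, jump_table: dict, word: int = 4) -> int:
    """Prefer a recorded jump address for prev_addr; otherwise fall
    back to the continuous (sequential) address."""
    return jump_table.get(prev_addr, prev_addr + word)
```

During system idle time the controller would call this with the last address the host fetched, then stage the predicted data into the cache before the host's next request arrives.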
Fig. 11 is a schematic structural diagram of a data reading apparatus according to an embodiment of the present invention, where the data reading apparatus can be applied to an electronic device having a memory and a host, as shown in fig. 11, the data reading apparatus 300 includes:
an address determining module 301, configured to determine, when the system is in an idle state, a predicted access address corresponding to a previous access address based on the previous access address sent by the host;
a first feedback module 302, configured to respond to a current access address sent by a host, and control a memory to feed back data to the host according to a predicted access address when the current access address matches the predicted access address;
and the control module 303 is configured to control the host to send the current access address to the memory if the current access address does not match the predicted access address.
Optionally, the address determining module includes:
the first determining submodule is used for determining the next access address as the predicted access address in the case that the next access address corresponding to the previous access address is a jump address of the previous access address;
and the second determining submodule is used for generating a continuous address based on the previous access address and determining the continuous address as the predicted access address in the case that the next access address corresponding to the previous access address is a continuous address of the previous access address.
Optionally, the first determining sub-module includes:
the first determining unit is used for determining, in the case that the next access address corresponding to the previous access address is a jump address of the previous access address, the next access address corresponding to the previous access address based on the previous access address sent by the host and the preset correspondence;
a second determination unit configured to determine a next access address as a predicted access address;
the preset corresponding relation comprises the corresponding relation between the previous access address and the corresponding predicted access address; the preset corresponding relation is the corresponding relation between the last access address and the predicted access address fed back by the memory.
Optionally, the data reading apparatus further includes:
and the second feedback module is used for controlling the memory to feed back data to the host according to the current access address.
Optionally, the data reading apparatus further includes:
the updating module is used for updating the preset corresponding relation according to the current access address and the corresponding predicted access address;
the preset corresponding relation comprises the corresponding relation between the current access address and the corresponding predicted access address; the preset corresponding relation is the corresponding relation between the current access address and the predicted access address fed back by the memory.
Optionally, the control module includes:
the cache clearing submodule is used for controlling the memory to clear cache data under the condition that the current access address is not matched with the predicted access address;
and the sending submodule is used for controlling the host to send the current access address to the memory.
In the embodiment of the invention, when the system suspends access to the embedded flash memory device, that is, when the system is in an idle state, the predicted access address can be generated immediately and automatically based on the previous access address and can be sent on the host's behalf at any time, which improves the instruction-fetching efficiency of the host. When the current access address matches the predicted access address, the memory is controlled to feed back data to the host according to the predicted access address; when they do not match, the host is controlled to send the current access address to the memory. This improves the accuracy of host instruction fetching and thereby the overall performance and processing efficiency of the electronic device.
The invention also provides a data reading apparatus, applied to an electronic device having a memory and a host, the apparatus comprising: a processor and a communication interface coupled to the processor; the processor is configured to run a computer program or instructions to implement each process of the data reading method in the method embodiments of fig. 1 to fig. 10, which are not repeated here to avoid repetition.
The data reading device in the embodiment of the present invention may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiment of the present invention is not particularly limited.
The data reading device in the embodiment of the present invention may be a device having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present invention are not limited in particular.
Fig. 12 is a schematic diagram illustrating a hardware structure of an electronic device according to an embodiment of the present invention. As shown in fig. 12, the electronic device 400 includes a processor 410.
As shown in fig. 12, the processor 410 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of programs according to the present invention.
As shown in fig. 12, the electronic device 400 may further include a communication line 440. Communication link 440 may include a path for transmitting information between the aforementioned components.
Optionally, as shown in fig. 12, the electronic device may further include a communication interface 420. The communication interface 420 may be one or more. Communication interface 420 may use any transceiver or the like for communicating with other devices or a communication network.
Optionally, as shown in fig. 12, the electronic device may further include a memory 430. The memory 430 is used to store computer-executable instructions for performing aspects of the present invention and is controlled for execution by the processor. The processor is used for executing the computer execution instructions stored in the memory, thereby realizing the method provided by the embodiment of the invention.
As shown in fig. 12, the memory 430 may be a read-only memory (ROM) or other types of static storage devices that can store static information and instructions, a Random Access Memory (RAM) or other types of dynamic storage devices that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to such. The memory 430 may be separate and coupled to the processor 410 via a communication link 440. The memory 430 may also be integrated with the processor 410.
Optionally, the computer-executable instructions in the embodiment of the present invention may also be referred to as application program codes, which is not specifically limited in this embodiment of the present invention.
In particular implementations, as one embodiment, the processor 410 may include one or more CPUs, such as CPU0 and CPU1 shown in fig. 12.
In a specific implementation, as an embodiment, as shown in fig. 12, the terminal device may include a plurality of processors, such as the first processor 4101 and the second processor 4102 in fig. 12. Each of these processors may be a single core processor or a multi-core processor.
Fig. 13 is a schematic structural diagram of a chip according to an embodiment of the present invention. As shown in fig. 13, the chip 500 includes one or more than two (including two) processors 410.
Optionally, as shown in fig. 13, the chip further includes a communication interface 420 and a memory 430, and the memory 430 may include a read-only memory and a random access memory and provide operating instructions and data to the processor. The portion of memory may also include non-volatile random access memory (NVRAM).
In some embodiments, as shown in FIG. 13, memory 430 stores elements, execution modules or data structures, or a subset thereof, or an expanded set thereof.
In the embodiment of the present invention, as shown in fig. 13, a corresponding operation is performed by calling an operation instruction stored in the memory (the operation instruction may be stored in the operating system).
As shown in fig. 13, the processor 410 controls the processing operation of any one of the terminal devices, and the processor 410 may also be referred to as a Central Processing Unit (CPU).
As shown in fig. 13, the memory 430 may include both read-only memory and random access memory, and provides instructions and data to the processor. A portion of the memory 430 may also include NVRAM. In application, the processor, communication interface, and memory are coupled together by a bus system that may include a power bus, a control bus, and a status signal bus in addition to the data bus. For clarity of illustration, however, the various buses are labeled as bus system 540 in fig. 13.
As shown in fig. 13, the method disclosed in the above embodiment of the present invention can be applied to a processor, or implemented by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated hardware logic circuits in the processor or by instructions in the form of software. The processor may be a general-purpose processor, a digital signal processor (DSP), an ASIC, a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components; the methods, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or performed by it. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be implemented directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in RAM, flash memory, ROM, PROM, EPROM, registers, or other storage media well known in the art. The storage medium is located in a memory, and the processor reads the information in the memory and completes the steps of the method in combination with its hardware.
In one aspect, a computer-readable storage medium is provided, in which instructions are stored, and when executed, the instructions implement the functions performed by the terminal device in the above embodiments.
In one aspect, a chip is provided, where the chip is applied to a terminal device, and the chip includes at least one processor and a communication interface, where the communication interface is coupled to the at least one processor, and the processor is configured to execute instructions to implement the functions performed by … in the foregoing embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the present invention are performed in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, a terminal, a user device, or other programmable apparatus. The computer program or instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another computer readable storage medium, for example, the computer program or instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire or wirelessly. The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that integrates one or more available media. The usable medium may be a magnetic medium, such as a floppy disk, a hard disk, a magnetic tape; or optical media such as Digital Video Disks (DVDs); it may also be a semiconductor medium, such as a Solid State Drive (SSD).
While the invention has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
While the invention has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the invention. Accordingly, the specification and figures are merely exemplary of the invention as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the invention. It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (8)
1. A data reading method is applied to an electronic device with a memory and a host, and the method comprises the following steps:
under the condition that the system is in an idle state, determining a predicted access address corresponding to a last access address based on the last access address sent by a host;
responding to a current access address sent by the host, and controlling the memory to feed back data to the host according to the predicted access address under the condition that the current access address is matched with the predicted access address;
and controlling the host to send the current access address to the memory if the current access address and the predicted access address do not match.
2. The method of claim 1, wherein determining the predicted access address corresponding to the previous access address based on the previous access address sent by the host comprises:
determining the next access address as the predicted access address in the case that the next access address corresponding to the previous access address is a jump address of the previous access address;
and generating a continuous address based on the previous access address and determining the continuous address as the predicted access address in the case that the next access address corresponding to the previous access address is a continuous address of the previous access address.
3. The method of claim 2, wherein determining the next access address as the predicted access address comprises:
determining the next access address corresponding to the previous access address based on the previous access address and a preset corresponding relation; determining the next access address as the predicted access address;
the preset corresponding relation comprises a corresponding relation between the access address and the corresponding predicted access address; and the preset corresponding relation is the corresponding relation between the access address and the predicted access address fed back by the memory.
4. The method of claim 1, wherein after controlling the host to send the current access address to the memory, the method further comprises:
and controlling the memory to feed back data to the host according to the current access address.
5. The method of claim 4, wherein after controlling the memory to feed back data to the host according to the current access address or after controlling the memory to feed back data to the host according to the predicted access address, the method further comprises:
updating the preset corresponding relation according to the current access address and the corresponding predicted access address;
the preset corresponding relation comprises a corresponding relation between the current access address and the corresponding predicted access address; and the preset corresponding relation is the corresponding relation between the current access address and the predicted access address fed back by the memory.
6. The method according to any one of claims 1 to 5, wherein the controlling the host to send the current access address to the memory in the case that the current access address and the predicted access address do not match comprises:
controlling the memory to empty cache data when the current access address and the predicted access address do not match;
controlling the host to send the current access address to the memory.
7. A data reading apparatus, applied to an electronic device having a memory and a host, the apparatus comprising: a processor and a communication interface coupled to the processor; the processor is used for running a computer program or instructions to implement the data reading method according to any one of claims 1 to 6.
8. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the data reading method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110399917.3A CN112799723A (en) | 2021-04-14 | 2021-04-14 | Data reading method and device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110399917.3A CN112799723A (en) | 2021-04-14 | 2021-04-14 | Data reading method and device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112799723A (en) | 2021-05-14 |
Family
ID=75811389
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110399917.3A Pending CN112799723A (en) | 2021-04-14 | 2021-04-14 | Data reading method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112799723A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060107024A1 (en) * | 2004-11-18 | 2006-05-18 | Sun Microsystems, Inc. | Mechanism and method for determining stack distance of running software |
CN105051684A (en) * | 2013-03-14 | 2015-11-11 | 桑迪士克科技股份有限公司 | System and method for predicting and improving boot-up sequence |
CN107885530A (en) * | 2016-11-14 | 2018-04-06 | 上海兆芯集成电路有限公司 | Submit the method and instruction cache of cache line |
CN109947667A (en) * | 2017-12-21 | 2019-06-28 | 华为技术有限公司 | Data access prediction method and apparatus |
CN111651120A (en) * | 2020-04-28 | 2020-09-11 | 中国科学院微电子研究所 | Method and apparatus for prefetching data |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114328312A (en) * | 2022-03-08 | 2022-04-12 | 深圳市航顺芯片技术研发有限公司 | Data processing method, computer device and readable storage medium |
CN115114190A (en) * | 2022-07-20 | 2022-09-27 | 上海合见工业软件集团有限公司 | SRAM data reading system based on prediction logic |
CN115114190B (en) * | 2022-07-20 | 2023-02-07 | 上海合见工业软件集团有限公司 | SRAM data reading system based on prediction logic |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111651384B (en) | Register reading and writing method, chip, subsystem, register set and terminal | |
US6810444B2 (en) | Memory system allowing fast operation of processor while using flash memory incapable of random access | |
CN108228498B (en) | DMA control device and image processor | |
CN114817965B (en) | High-speed encryption and decryption system and method for implementing MSI interrupt processing based on multi-algorithm IP core | |
CN114662136A (en) | A high-speed encryption and decryption system and method of multi-algorithm IP core based on PCIE channel | |
US7299341B2 (en) | Embedded system with instruction prefetching device, and method for fetching instructions in embedded systems | |
CN112799723A (en) | Data reading method and device and electronic equipment | |
CN115905046B (en) | Network card driving data packet processing method and device, electronic equipment and storage medium | |
US20230267079A1 (en) | Processing apparatus, method and system for executing data processing on a plurality of channels | |
US6581119B1 (en) | Interrupt controller and a microcomputer incorporating this controller | |
CN118349286B (en) | Processor, instruction processing device, electronic equipment and instruction processing method | |
KR20170081275A (en) | Reconfigurable fetch pipeline | |
US20060265532A1 (en) | System and method for generating bus requests in advance based on speculation states | |
US6738837B1 (en) | Digital system with split transaction memory access | |
CN114721975A (en) | Chain table processing method and device, accelerator, circuit board, equipment and storage medium | |
TW200417914A (en) | Interrupt-processing system for shortening interrupt latency in microprocessor | |
CN116467235B (en) | DMA-based data processing method and device, electronic equipment and medium | |
US7185122B2 (en) | Device and method for controlling data transfer | |
KR20040067063A (en) | The low power consumption cache memory device of a digital signal processor and the control method of the cache memory device | |
KR102462578B1 (en) | Interrupt controller using peripheral device information prefetch and interrupt handling method using the same | |
CN107807888B (en) | Data prefetching system and method for SOC architecture | |
CN111506530A (en) | Interrupt management system and management method thereof | |
KR102260820B1 (en) | Symmetrical interface-based interrupt signal processing device and method | |
US12039294B2 (en) | Device and method for handling programming language function | |
CN117389915B (en) | Cache system, read command scheduling method, system on chip and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 2021-05-14