Storage device
Technical Field
The present application relates to memory technology, and in particular, to a data processing system having multiple data paths and a virtual electronic device constructed using the multiple data paths.
Background
In some applications, a processor handles large-scale concurrent tasks. For example, an embedded processor in a network device, a storage device, or the like processes multiple network packets or IO commands concurrently.
On desktop and server CPUs, the operating system schedules a plurality of processes and/or threads running on the CPU to process tasks in a time-sliced and/or preemptive manner, so that a user does not need to intervene excessively in switching between the processes/threads. The operating system selects an appropriate process/thread to schedule so as to take full advantage of the CPU computing power. In embedded CPUs, however, resources such as available memory and CPU processing power are limited, and the tasks handled are specific, for example relatively simple tasks of massive concurrency. Moreover, some embedded systems have strict requirements on performance, especially task processing latency, and the operating systems of the prior art are difficult to adapt to such scenarios.
To improve processing performance, a task is typically divided into multiple phases (or subtasks); for a single task the phases are processed sequentially, while multiple tasks are processed concurrently.
Signal-slot based task scheduling schemes are provided in Chinese patent applications 201811095364.7, 201811160925.7, and 2019102538859 to handle a large number of concurrent IO commands and to guarantee the overall quality of service of multiple IO commands.
FIG. 1A is a schematic diagram of task scheduling.
In FIG. 1A, the direction from left to right is the direction of the passage of time. Also shown are a plurality of subtasks being processed (1-1, 2-1, 3-1, 1-2, 2-2, and 3-2), wherein, in a reference numeral of the form "a-b", the first symbol a identifies the task and the second symbol b identifies a subtask of that task. FIG. 1A shows 3 tasks processed in time sequence, each task including 2 subtasks.
Solid arrows indicate the time sequence in which the plurality of tasks are processed, and dashed arrows indicate the logical sequence of task processing. For example, taking task 1 as an example, its subtask 1-1 is processed first, and then its subtask 1-2 is processed. Still by way of example, referring to FIG. 1A, after subtask 1-1 is processed, subtask 2-1 and subtask 3-1 are scheduled for execution to improve the parallelism of task processing; it is then identified that the conditions for executing subtask 1-2 are met, and after subtask 3-1 is processed, subtask 1-2 is scheduled for execution.
On a processor, tasks (or subtasks) are processed by executing code segments. A single CPU (or CPU core) handles only a single task at any one time. Illustratively, as shown in FIG. 1A, for a plurality of tasks to be processed, the code segment processing subtask 1-1 is executed first, followed in turn by the code segments processing subtask 2-1, subtask 3-1, subtask 1-2, subtask 2-2, and subtask 3-2. Optionally, the logical order of task processing is indicated in the code segments of the respective processing tasks (or subtasks). For example, the logical order includes that subtask 1-2 is to be processed after subtask 1-1. As yet another example, the code segment processing subtask 1-1 indicates that the code segment to be executed next in logical order is the code segment processing subtask 1-2.
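The interleaving of FIG. 1A can be sketched as follows. This is an illustrative model only, not part of the present application: the names (`make_segment`, `ready`, the use of a queue) are assumptions, and real firmware would run compiled code segments rather than Python closures.

```python
# Hypothetical sketch of the scheduling pattern in FIG. 1A: each code
# segment may name the code segment that logically follows it, and a
# single CPU interleaves the subtasks of several tasks.

from collections import deque

log = []

def make_segment(name, successor=None):
    """A code segment processes one subtask and reports its logical successor."""
    def segment():
        log.append(name)          # stand-in for the real subtask work
        return successor          # code segment to be executed later, if any
    return segment

# Task 1 has subtasks 1-1 and 1-2; segment 1-1 indicates 1-2 as successor, etc.
seg_1_2 = make_segment("1-2")
seg_2_2 = make_segment("2-2")
seg_3_2 = make_segment("3-2")
ready = deque([make_segment("1-1", seg_1_2),
               make_segment("2-1", seg_2_2),
               make_segment("3-1", seg_3_2)])

while ready:                      # a single CPU runs one segment at a time
    nxt = ready.popleft()()
    if nxt is not None:
        ready.append(nxt)         # schedule the logically next subtask

print(log)
```

Running the loop reproduces the order shown in FIG. 1A: the first subtasks of all three tasks are processed, then the second subtasks.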
FIG. 1B is a block diagram of a task processing system.
Referring to FIG. 1B, the task processing system includes two parts: software and hardware. The hardware includes, for example, one or more CPUs that run the software, and other hardware resources (e.g., memory, codecs, interfaces, accelerators, interrupt controllers, DMA units, etc.) that handle related tasks.
The code segments of the software running on the CPU are referred to as task processing units. The task processing system includes a plurality of task processing units. Each task processing unit processes the same or different tasks. For example, task processing unit 0 processes a first subtask of a task (e.g., subtask 1-1, subtask 2-1, and subtask 3-1), while task processing unit 1, task processing unit 2, and task processing unit 3 process a second subtask of a task (e.g., subtask 1-2, subtask 2-2, and subtask 3-2).
The task processing system further comprises a software implemented task management unit for scheduling one of the task processing units to run on hardware.
Resources required by the task processing unit include, for example, a cache unit, a descriptor (or called context) of the processing task, and the like.
A storage device provides storage capability to a host coupled thereto. The host and the storage device may be coupled by a variety of means, including but not limited to SATA (Serial Advanced Technology Attachment), SCSI (Small Computer System Interface), SAS (Serial Attached SCSI), IDE (Integrated Drive Electronics), USB (Universal Serial Bus), PCIe (Peripheral Component Interconnect Express), NVMe (NVM Express), Ethernet, Fibre Channel, a wireless communication network, etc. The host may be an information processing device capable of communicating with the storage device in the manners described above, such as a personal computer, tablet, server, portable computer, network switch, router, cellular telephone, personal digital assistant, or the like. The storage device processes IO commands. IO commands include, for example, read commands, write commands, or other IO commands. The storage device includes an interface, a control unit, one or more NVM chips, and optionally a DRAM (Dynamic Random Access Memory). The control unit of the storage device includes one or more CPUs. A CPU runs software or firmware, as a task processing unit, to process IO commands.
The storage device also splits an IO command into one or more subcommands for processing. Each subcommand has a relatively uniform specification, e.g., it accesses an address range of the same size, so that the task processing unit handling subcommands can be implemented in a simpler manner.
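The splitting into uniformly specified subcommands can be sketched as follows. The 4 KB granularity, the `lba`/`length` field names, and the dictionary representation are illustrative assumptions, not the application's actual format.

```python
# Minimal sketch: split the address range of an IO command into
# subcommands that each cover at most one aligned 4 KB region, so that
# every subcommand has a uniform specification.

def split_command(lba, length, unit=4096):
    """Split [lba, lba + length) into subcommands aligned to `unit`."""
    subcommands = []
    offset, remaining = lba, length
    while remaining > 0:
        # The distance to the next aligned boundary bounds this subcommand.
        span = min(remaining, unit - offset % unit)
        subcommands.append({"lba": offset, "length": span})
        offset += span
        remaining -= span
    return subcommands

# An unaligned 10 KB command becomes several uniformly specified subcommands.
subs = split_command(lba=3072, length=10240)
print(subs)
```

Each resulting subcommand touches a single aligned region, which is what allows the downstream task processing units to stay simple.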
Disclosure of Invention
There is a need to provide storage devices of multiple specifications, for example with different storage capacities, different capabilities, and/or differentiated functionality. Thus, different design versions are provided for storage devices of different specifications. These different designs have commonalities, and it is desirable to reuse existing technical achievements, to enable efficient assembly and interaction of components developed by multiple members of a team, and to allow a design to be easily extended to provide enhanced functionality and/or performance.
In order to solve the technical problem in the prior art that different design versions must be provided for storage devices of different specifications, according to a first aspect of the present application, there is provided a construction method of a first downstream data path, comprising: creating at least one task processing unit, creating at least one channel, and associating the at least one task processing unit with the at least one channel.
According to the construction method of the first downstream data path of the first aspect of the present application, there is provided the construction method of a second downstream data path of the first aspect of the present application, wherein the task processing unit comprises an inbound interface, an outbound interface, and a DTU processing module; the method further comprises: the inbound interface acquiring a data transfer unit (DTU) from a channel associated with it, the outbound interface adding a DTU to a channel associated with it, and the DTU processing module extracting a subcommand from a DTU acquired through the inbound interface, processing the subcommand, and adding the DTU carrying the processed subcommand, through the outbound interface, to the channel associated with it.
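A hedged sketch of a task processing unit with inbound and outbound interfaces follows. The class and field names (`Channel`, `TaskProcessingUnit`, `"subcommand"`) are illustrative assumptions; the real implementation is firmware, not Python.

```python
# Sketch: an inbound interface pops DTUs from an associated channel, a
# DTU processing module transforms the carried subcommand, and an
# outbound interface pushes the DTU to the next channel.

from collections import deque

class Channel:
    def __init__(self):
        self.dtus = deque()                 # the channel's DTU list
    def push(self, dtu):
        self.dtus.append(dtu)
    def pop(self):
        return self.dtus.popleft() if self.dtus else None

class TaskProcessingUnit:
    def __init__(self, inbound, outbound, process):
        self.inbound = inbound              # channel the unit reads from
        self.outbound = outbound            # channel the unit writes to
        self.process = process              # the DTU processing module

    def run_once(self):
        dtu = self.inbound.pop()            # inbound interface acquires a DTU
        if dtu is None:
            return False
        dtu["subcommand"] = self.process(dtu["subcommand"])
        self.outbound.push(dtu)             # outbound interface forwards it
        return True

c_in, c_out = Channel(), Channel()
unit = TaskProcessingUnit(c_in, c_out, lambda sc: sc + ["translated"])
c_in.push({"subcommand": ["read"]})
unit.run_once()
out_dtu = c_out.pop()
print(out_dtu)
```

Because units touch only their associated channels, units developed by different team members can be wired together without knowing each other's internals, which is the reuse goal stated above.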
According to the construction method of the first or second downstream data path of the first aspect of the present application, there is provided the construction method of a third downstream data path of the first aspect of the present application, wherein a channel comprises a DTU list and a plurality of functions that operate on the DTU list, the DTU list being a container that contains one or more DTUs, the plurality of functions comprising at least a first Push function and a Pop function, wherein at least one DTU is added to the DTU list when the first Push function is called, and at least one DTU is acquired from the DTU list when the Pop function is called.
According to the construction method of the third downlink data path of the first aspect of the present application, there is provided the construction method of the fourth downlink data path of the first aspect of the present application, the channel further comprises a direct forwarding unit, the functions of the plurality of operation DTU lists further comprise a second Push function, wherein in response to the second Push function being called, the direct forwarding unit provides the task processing unit with the DTU acquired by the second Push function.
According to the construction method of the fourth downlink data path of the first aspect of the present application, there is provided the construction method of the fifth downlink data path of the first aspect of the present application, wherein the creating of the at least one channel further comprises creating a direct forwarding unit of the channel and setting a destination index, wherein the direct forwarding unit calls a function associated with a task processing unit indicated by the destination index to provide a DTU acquired from a second Push function to the task processing unit.
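The two Push variants and the destination index can be sketched as follows. The registry `handlers`, the method names `push`/`push_direct`, and the string indexes are assumptions for illustration only.

```python
# Sketch: the first Push function enqueues the DTU in the channel's DTU
# list; the second Push function hands the DTU to the direct forwarding
# unit, which calls the task processing unit named by the channel's
# destination index, bypassing the list.

from collections import deque

handlers = {}                         # destination index -> unit's handler

class Channel:
    def __init__(self, dest_index=None):
        self.dtus = deque()           # the DTU list
        self.dest_index = dest_index  # set when the channel is created

    def push(self, dtu):              # first Push function: enqueue
        self.dtus.append(dtu)

    def push_direct(self, dtu):       # second Push function: direct forwarding
        handlers[self.dest_index](dtu)

    def pop(self):
        return self.dtus.popleft() if self.dtus else None

received = []
handlers["unit-1"] = received.append  # the destination task processing unit

ch = Channel(dest_index="unit-1")
ch.push({"subcommand": "queued"})         # waits in the DTU list
ch.push_direct({"subcommand": "urgent"})  # delivered immediately
queued = ch.pop()
print(received, queued)
```

Direct forwarding trades the decoupling of the DTU list for lower latency, which matters for the strict task-processing-delay requirements mentioned in the Background.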
According to one of the first to fifth downstream data path construction methods of the first aspect of the present application, there is provided the sixth downstream data path construction method according to the first aspect of the present application, wherein the channel acquires DTUs from one or more of the task processing units associated therewith, and provides DTUs to only one of the task processing units associated therewith.
According to one of the first to fourth downstream data path construction methods of the first aspect of the present application, there is provided the seventh downstream data path construction method according to the first aspect of the present application, wherein the DTU is a message unit carrying a subcommand and/or a subcommand context.
According to one of the first to seventh downstream data path construction methods of the first aspect of the present application, there is provided the eighth downstream data path construction method according to the first aspect of the present application, wherein the downstream data path further comprises at least one resource manager, the resource manager managing the use of specified resources, the method further comprising associating at least one task processing unit with at least one resource manager, whereby the task processing unit accesses resources through the resource manager associated therewith.
According to the eighth downstream data path construction method of the first aspect of the present application, there is provided the ninth downstream data path construction method according to the first aspect of the present application, further comprising associating one or more channels with one or more resource managers, wherein a monitoring function of the resource manager is registered with the channel, and in response to the channel being added with a DTU, the registered monitoring function is called to access resources managed by the resource manager corresponding to the called monitoring function.
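The monitoring-function mechanism of the ninth construction method can be sketched as follows. The names (`ResourceManager`, `on_dtu_pushed`, `register`) and the buffer-allocation behavior are illustrative assumptions.

```python
# Sketch: a resource manager's monitoring function is registered with a
# channel; whenever a DTU is pushed, the registered function is called
# and accesses (here: allocates) the resource managed by that manager.

from collections import deque

class ResourceManager:
    def __init__(self, total):
        self.free = total                 # state of the specified resource
    def allocate(self):
        assert self.free > 0, "resource exhausted"
        self.free -= 1
    def release(self):
        self.free += 1
    def on_dtu_pushed(self, dtu):         # the monitoring function
        self.allocate()
        dtu["buffer"] = "allocated"       # record the granted resource

class Channel:
    def __init__(self):
        self.dtus = deque()
        self.monitors = []
    def register(self, fn):               # register a monitoring function
        self.monitors.append(fn)
    def push(self, dtu):
        self.dtus.append(dtu)
        for fn in self.monitors:          # called in response to the push
            fn(dtu)

rm = ResourceManager(total=2)
ch = Channel()
ch.register(rm.on_dtu_pushed)
ch.push({"subcommand": "write"})
print(rm.free)
```

Registering the manager on the channel, rather than inside each unit, keeps resource accounting in one place while units remain unaware of it.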
According to the eighth or ninth downstream data path construction method of the first aspect of the present application, there is provided the tenth downstream data path construction method according to the first aspect of the present application, further comprising a resource manager managing allocation or reclamation of the specified resource, the resource manager managing a state of the specified resource.
According to one of the construction methods of the first to tenth downstream data paths of the first aspect of the present application, there is provided the construction method of an eleventh downstream data path of the first aspect of the present application, wherein the number of channels is equal to or less than the number of task processing units.
According to one of the construction methods of the first to eleventh downstream data paths of the first aspect of the present application, there is provided the construction method of a twelfth downstream data path of the first aspect of the present application, wherein associating the at least one task processing unit with the at least one channel comprises setting, for the inbound interface of each task processing unit of the downstream data path, one or more channels associated therewith, and setting, for the outbound interface of each task processing unit of the downstream data path, a channel associated therewith.
According to one of the first to twelfth downstream data path construction methods of the first aspect of the present application, there is provided the thirteenth downstream data path construction method according to the first aspect of the present application, wherein the task processing unit is a software unit that can be scheduled, and the channel is not schedulable.
According to one of the construction methods of the first to thirteenth downstream data paths of the first aspect of the present application, there is provided the construction method of the fourteenth downstream data path of the first aspect of the present application, wherein the at least one task processing unit is associated with the at least one channel, including associating one task processing unit with two or more channels, and/or associating one task processing unit with one channel.
According to one of the construction methods of the first to fourteenth downstream data paths of the first aspect of the present application, there is provided the construction method of the fifteenth downstream data path of the first aspect of the present application, the at least one task processing unit being created, comprising a first task processing unit and a second task processing unit, wherein the first task processing unit and the second task processing unit are task processing units that process the same or different tasks.
According to a fifteenth downstream data path construction method of the first aspect of the present application, there is provided a sixteenth downstream data path construction method of the first aspect of the present application, wherein the at least one task processing unit includes a first task processing unit that processes an address conversion task, a second task processing unit that processes a data assembly task, and a third task processing unit that processes a garbage collection task, the created at least one channel includes a first channel, a second channel, and a third channel, an inbound interface of the first task processing unit is associated with the first channel and the third channel, an outbound interface of the first task processing unit is associated with the second channel, and an outbound interface of the second task processing unit is associated with the third channel.
According to a sixteenth downstream data path constructing method of the first aspect of the present application, there is provided a seventeenth downstream data path constructing method according to the first aspect of the present application, wherein the first task processing unit is associated with a resource manager that manages address mapping table resources, the second task processing unit is associated with a resource manager that manages accelerator resources, and the third task processing unit is associated with a resource manager that manages storage medium resources.
According to one of the fifteenth to seventeenth downstream data path constructing methods of the first aspect of the present application, there is provided the eighteenth downstream data path constructing method of the first aspect of the present application, wherein the at least one task processing unit further includes a fourth task processing unit that processes an address conversion task, the created at least one channel further includes a fourth channel and a fifth channel, an inbound interface of the fourth task processing unit is associated with the fourth channel, an outbound interface of the fourth task processing unit is associated with the fifth channel, and an inbound interface of the second task processing unit is associated with the fifth channel.
According to the eighteenth downstream data path constructing method of the first aspect of the present application, there is provided the nineteenth downstream data path constructing method of the first aspect of the present application, wherein the created at least one channel further includes a sixth channel, the inbound interface of the fourth task processing unit is associated with the sixth channel, and the outbound interface of the third task processing unit is associated with the sixth channel.
According to one of the construction methods of the first to nineteenth downstream data paths of the first aspect of the present application, there is provided the construction method of the twentieth downstream data path of the first aspect of the present application, further comprising associating the downstream data path with a command transmission unit and a subcommand processing unit, wherein the command transmission unit splits the IO command into one or more subcommands, acquires the subcommand carried by the DTU, and delivers the DTU carrying the subcommand to the task processing unit coupled to itself, and the subcommand processing unit accesses the storage medium according to the subcommand carried by the DTU.
According to the twentieth downstream data path construction method of the first aspect of the present application, there is provided a twenty-first downstream data path construction method of the first aspect of the present application, wherein a first task processing unit of the at least one task processing unit is associated with the command transmission unit, the first task processing unit being directly connected to the command transmission unit or coupled thereto through a channel, the first task processing unit being a task processing unit, of the at least one task processing unit, that processes a subcommand; and a second task processing unit of the at least one task processing unit is associated with the subcommand processing unit, the second task processing unit being directly connected to the subcommand processing unit, the second task processing unit being the task processing unit, of the at least one task processing unit, that last processes the subcommand.
According to a second aspect of the present application, there is provided a first information processing apparatus according to the second aspect of the present application, comprising a memory, a processor and a program stored on the memory and executable on the processor, the processor implementing the method according to any one of the above-mentioned first aspects when executing the program.
According to a third aspect of the present application, there is provided a construction apparatus for a first downstream data path according to the third aspect of the present application, including a first creation unit for creating at least one task processing unit, a second creation unit for creating at least one channel, and an association unit for associating the at least one task processing unit with the at least one channel.
According to a fourth aspect of the present application, there is provided a first downstream data path according to the fourth aspect of the present application, including at least one task processing unit and at least one channel, wherein a first task processing unit of the at least one task processing unit acquires a DTU, carrying a subcommand, from a channel associated with itself; the first task processing unit is any one of the at least one task processing unit, and a second task processing unit is one of the at least one task processing unit other than the first task processing unit; the first task processing unit processes the subcommand and fills the DTU into the channel associated with itself after the subcommand processing is completed, so that the second task processing unit acquires the DTU from the channel.
According to a first downstream data path of a fourth aspect of the present application, there is provided a second downstream data path of the fourth aspect of the present application, wherein the first task processing unit acquires the DTU from a first channel associated with itself and fills the DTU into a second channel associated with itself after completion of subcommand processing so that the second task processing unit acquires the DTU from the second channel, or the first task processing unit acquires the DTU from a first channel associated with itself and fills the DTU into the first channel after completion of subcommand processing so that the second task processing unit acquires the DTU from the first channel, wherein the first channel and the second channel are both associated with the first task processing unit and the first channel and the second channel are different channels.
According to the first or second downstream data path of the fourth aspect of the present application, there is provided a third downstream data path according to the fourth aspect of the present application, wherein the first task processing unit comprises an inbound interface, an outbound interface, and a DTU processing module; the DTU processing module obtains the DTU, through the inbound interface, from a channel associated with the first task processing unit; the DTU processing module obtains a subcommand from the DTU, processes the subcommand, and carries the processed subcommand on the DTU; and the DTU processing module adds the DTU, through the outbound interface, to the channel associated with the first task processing unit.
According to one of the first to third downstream data paths of the fourth aspect of the present application, there is provided a fourth downstream data path according to the fourth aspect of the present application, wherein each of the at least one channel comprises a DTU list for accommodating a plurality of DTUs and a plurality of functions that operate on the DTU list, the plurality of functions comprising at least a first Push function and a Pop function, wherein the task processing unit adds at least one DTU to the DTU list by calling the first Push function, and the task processing unit acquires at least one DTU from the DTU list by calling the Pop function.
According to a fourth downstream data path of the fourth aspect of the present application, there is provided a fifth downstream data path of the fourth aspect of the present application, each channel further comprises a direct forwarding unit, the functions of the plurality of operation DTU lists further comprise a second Push function, and when the first task processing unit calls the second Push function, the direct forwarding unit obtains the DTU from the second Push function and provides the DTU to the second task processing unit.
According to one of the first to fifth downstream data paths of the fourth aspect of the present application, there is provided a sixth downstream data path according to the fourth aspect of the present application, the downstream data path further comprising at least one resource manager, the resource manager managing the use of specified resources, the at least one task processing unit being associated with the at least one resource manager such that the task processing unit accesses the resources through the resource manager associated therewith.
According to a sixth downstream data path of the fourth aspect of the present application, there is provided a seventh downstream data path according to the fourth aspect of the present application, the resource manager manages a plurality of types of resources including at least a cache resource, an address mapping table resource, a computing resource and/or a storage medium resource.
According to a seventh downstream data path of the fourth aspect of the present application, there is provided an eighth downstream data path according to the fourth aspect of the present application, wherein one resource manager manages a plurality of types of resources, or each of a plurality of resource managers manages one type of resource, different resource managers managing different types of resources.
According to a seventh or eighth downstream data path of the fourth aspect of the present application, there is provided a ninth downstream data path according to the fourth aspect of the present application, the inbound interface of the task processing unit calls a Pop function of the channel associated with itself to obtain the DTU, and the outbound interface of the task processing unit calls a Push function of the channel associated with itself to add the DTU to the channel.
According to one of the first to ninth downstream data paths of the fourth aspect of the present application, there is provided a tenth downstream data path of the fourth aspect of the present application, the first task processing unit further comprising a callback function, the first task processing unit writing a callback function index indicating the callback function in the DTU before adding the DTU to a channel, wherein the callback function is called by the callback function index so that a resource manager is requested to release resources.
According to one of the first to tenth downstream data paths of the fourth aspect of the present application, there is provided an eleventh downstream data path according to the fourth aspect of the present application, wherein the plurality of task processing units are associated with a third channel, and if the plurality of task processing units call a first Push function of the third channel, a plurality of DTUs from the plurality of task processing units are added to a DTU list in the third channel, the third channel being any one of at least one channel.
According to one of the first to eleventh downstream data paths of the fourth aspect of the present application, there is provided a twelfth downstream data path according to the fourth aspect of the present application, a plurality of channels are associated with the first task processing unit, the first task processing unit acquires DTUs from the plurality of channels according to priorities of the plurality of channels, or the first task processing unit polls the plurality of channels to acquire DTUs from the plurality of channels.
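The two inbound policies of the twelfth downstream data path — strict priority versus polling — can be sketched as follows. The function names and the use of plain deques as channels are assumptions for illustration.

```python
# Sketch: a task processing unit associated with several channels either
# always drains the highest-priority non-empty channel first, or polls
# the channels round-robin.

from collections import deque
from itertools import cycle

def pop_by_priority(channels):
    """Channels are ordered highest priority first; take from the first
    non-empty one."""
    for ch in channels:
        if ch:
            return ch.popleft()
    return None

def make_poller(channels):
    order = cycle(range(len(channels)))
    def poll():
        for _ in range(len(channels)):    # at most one full polling round
            ch = channels[next(order)]
            if ch:
                return ch.popleft()
        return None
    return poll

high, low = deque(["h1", "h2"]), deque(["l1"])
first = pop_by_priority([high, low])      # priority: high channel wins

a, b = deque(["a1", "a2"]), deque(["b1"])
poll = make_poller([a, b])
polled = [poll(), poll(), poll()]         # round-robin across channels
print(first, polled)
```

Priority ordering protects latency-sensitive DTUs (e.g., host IO over garbage collection), while polling gives every associated channel a fair share.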
According to one of the first to twelfth downstream data paths of the fourth aspect of the present application, there is provided a thirteenth downstream data path according to the fourth aspect of the present application, the downstream data path including a third task processing unit that processes a cache task, a fourth task processing unit that processes an address conversion task, a fifth task processing unit that processes a data assembly task, and a sixth task processing unit that processes a garbage collection task, wherein the third task processing unit acquires a DTU from the third channel or the seventh channel, executes the cache task, and adds the DTU carrying the execution result of the cache task to the fourth channel; the fourth task processing unit acquires a DTU from the fourth channel, executes the address conversion task, and adds the DTU carrying the execution result of the address conversion task to the fifth channel; the fifth task processing unit acquires a DTU from the fifth channel and executes the data assembly task; and the sixth task processing unit acquires a DTU from the sixth channel, executes the garbage collection task, and adds the DTU carrying the execution result of the garbage collection task to the seventh channel.
According to a twelfth downstream data path of a fourth aspect of the present application, there is provided a fourteenth downstream data path according to the fourth aspect of the present application, the downstream data path including a third task processing unit, a fourth task processing unit, a fifth task processing unit, a sixth task processing unit, a seventh task processing unit, an eighth task processing unit, a plurality of channels, and a resource manager, the third task processing unit and the fourth task processing unit processing cache tasks, the fifth task processing unit and the sixth task processing unit processing address conversion tasks, the seventh task processing unit processing data assembly tasks, and the eighth task processing unit processing garbage collection tasks; wherein the third task processing unit acquires a DTU from a third channel, executes a cache task, and adds the DTU carrying the execution result of the cache task to a fifth channel; the fourth task processing unit acquires a DTU from a fourth channel, executes the cache task, and adds the DTU carrying the execution result of the cache task to a sixth channel; the fifth task processing unit acquires a DTU from the fifth channel and/or the tenth channel, executes an address conversion task, and adds the DTU carrying the execution result of the address conversion task to a seventh channel; the sixth task processing unit acquires a DTU from the sixth channel and/or the eleventh channel, executes the address conversion task, and adds the DTU carrying the execution result of the address conversion task to an eighth channel; the seventh task processing unit acquires a DTU from the seventh channel and/or the eighth channel and executes a data assembly task; and the eighth task processing unit acquires a DTU from the ninth channel, executes the garbage collection task, and adds the DTU carrying the execution result of the garbage collection task to the tenth channel and/or the eleventh channel.
According to a fifth aspect of the present application, there is provided a method for constructing a first upstream data path according to the fifth aspect of the present application, comprising writing, in one or more task processing units of a downstream data path, one or more callback function indexes into a data transfer unit DTU to construct an upstream data path for the DTU, wherein one or more callback functions indicated by the one or more callback function indexes constitute the upstream data path, the DTU carries a subcommand, and calling, in response to completion of the subcommand processing, the callback function indicated by the one or more callback function indexes recorded in the DTU to return a processing result of the subcommand through the upstream data path.
According to a first upstream data path construction method of a fifth aspect of the present application, there is provided a second upstream data path construction method of the fifth aspect of the present application, wherein one or more callback function indexes in the DTU are ordered, callback functions indicated by the one or more callback function indexes in the DTU that are sequentially called constitute the upstream data path, and wherein an order in which callback functions indicated by the one or more callback function indexes in the DTU are called is an inverse order in which the one or more callback function indexes are written to the DTU in the upstream data path.
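The inverse-order callback unwinding that forms the upstream data path can be sketched as follows. The callback table, index strings, and resource names are illustrative assumptions only.

```python
# Sketch: each task processing unit on the downstream path writes a
# callback function index into the DTU; when the subcommand completes,
# the indicated callbacks are invoked in the reverse of the order in
# which the indexes were written, releasing each unit's resources.

callbacks = {}                        # callback function index -> function
released = []
callbacks["free_cache"] = lambda dtu: released.append("cache")
callbacks["free_map"] = lambda dtu: released.append("map")

dtu = {"subcommand": "write", "callback_indexes": []}

# Downstream direction: each unit records its cleanup callback index.
dtu["callback_indexes"].append("free_cache")  # e.g., a cache unit
dtu["callback_indexes"].append("free_map")    # e.g., address conversion unit

# Upstream direction: the subcommand is done; unwind in reverse order.
for index in reversed(dtu["callback_indexes"]):
    callbacks[index](dtu)

print(released)
```

Unwinding in reverse (last allocated, first released) mirrors how nested resource ownership is normally torn down, so no callback runs before the resources it depends on are still valid.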
According to the first or second upstream data path construction method of the fifth aspect of the present application, there is provided a third upstream data path construction method according to the fifth aspect of the present application, wherein, when a first task processing unit processes the subcommand, a first callback function index is written to the DTU, the callback function indicated by the first callback function index being used for releasing a first resource allocated by the first task processing unit for processing the subcommand, and the first task processing unit being a task processing unit in the downstream data path.
According to one of the first to third upstream data path construction methods of the fifth aspect of the present application, there is provided a fourth upstream data path construction method according to the fifth aspect of the present application, wherein, when a second task processing unit processes the subcommand, a second callback function index is written to the DTU, the callback function indicated by the second callback function index being used for releasing a second resource allocated by the second task processing unit for processing the subcommand, and the second task processing unit being a task processing unit in the downstream data path.
According to one of the first to fourth upstream data path construction methods of the fifth aspect of the present application, there is provided a fifth upstream data path construction method according to the fifth aspect of the present application, wherein the downstream data path includes a plurality of task processing units, each of which acquires the subcommand from the DTU and writes a callback function index into the DTU during processing of the subcommand, and the callback function indexes written into the DTU by the task processing units are the same or different.
According to one of the first to fifth upstream data path construction methods of the fifth aspect of the present application, there is provided a sixth upstream data path construction method according to the fifth aspect of the present application, the upstream data path further comprising a monitoring unit that is the first task processing unit of the upstream data path, the monitoring unit monitoring whether processing of the subcommand is complete and, in response to completion of the subcommand processing, acquiring the DTU indicating the processing result of the subcommand.
According to the sixth upstream data path construction method of the fifth aspect of the present application, there is provided a seventh upstream data path construction method according to the fifth aspect of the present application, wherein the last task processing unit of the downstream data path sends the DTU to a subcommand processing unit, and the subcommand processing unit caches the DTU, processes the subcommand indicated by the DTU, and provides the processing result of the subcommand to the monitoring unit.
According to one of the first to seventh upstream data path construction methods of the fifth aspect of the present application, there is provided the eighth upstream data path construction method according to the fifth aspect of the present application, the one or more task processing units being schedulable.
According to one of the first to eighth upstream data path construction methods of the fifth aspect of the present application, there is provided a ninth upstream data path construction method according to the fifth aspect of the present application, wherein the callback function called last in the upstream data path sends the DTU to a command transmission unit, the command transmission unit acquires the processing result of the subcommand according to the indication of the DTU and returns the processing results of one or more subcommands to the sender of the command, the command including the one or more subcommands, and the command transmission unit releases the DTU.
According to the ninth upstream data path construction method of the fifth aspect of the present application, there is provided a tenth upstream data path construction method according to the fifth aspect of the present application, wherein the command transmission unit, in response to receiving the command, splits the command into one or more subcommands, assigns a DTU to each subcommand, and indicates the corresponding subcommand in each assigned DTU.
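Command splitting and DTU assignment can be sketched as follows. This is an illustrative Python model under assumed conventions: the dictionary layout, the LBA-range command shape, and the fixed `chunk` size are all hypothetical, chosen only to show one DTU being assigned per subcommand.

```python
def split_command(lba, length, chunk=8):
    """Split a command covering blocks [lba, lba+length) into subcommands
    of at most `chunk` blocks each (names and chunk size are illustrative)."""
    subcommands = []
    offset = 0
    while offset < length:
        n = min(chunk, length - offset)
        subcommands.append({"lba": lba + offset, "length": n})
        offset += n
    return subcommands

def allocate_dtus(subcommands):
    # One DTU per subcommand; each DTU indicates exactly one subcommand
    # and starts with an empty callback-index list for the upstream path.
    return [{"subcommand": sc, "callback_indexes": []} for sc in subcommands]
```

For example, a 20-block command splits into subcommands of 8, 8, and 4 blocks, each carried by its own DTU.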
According to a sixth aspect of the present application, there is provided a first information processing apparatus according to the sixth aspect of the present application, comprising a memory, a processor, and a program stored on the memory and executable on the processor, the processor implementing any one of the methods of the fifth aspect when executing the program.
According to a seventh aspect of the present application, there is provided a first upstream data path construction device according to the seventh aspect of the present application, comprising a callback function index generating unit configured to write, in one or more task processing units of a downstream data path, one or more callback function indexes into a data transfer unit (DTU) to construct an upstream data path for the DTU, wherein the one or more callback functions indicated by the one or more callback function indexes constitute the upstream data path, the DTU carries a subcommand, and one task processing unit of the downstream data path acquires and processes the subcommand indicated in the DTU and provides the DTU to another task processing unit of the downstream data path; and a callback function index calling unit configured to sequentially call the one or more callback functions recorded in the DTU, in response to completion of the processing of the subcommand, to return a processing result of the subcommand through the upstream data path.
According to an eighth aspect of the present application, there is provided a first data processing system according to the eighth aspect of the present application, comprising an upstream data path and a downstream data path, the downstream data path comprising a plurality of task processing units, wherein a first task processing unit of the plurality of task processing units is coupled to a command transmission unit, a second task processing unit of the plurality of task processing units is coupled to a subcommand processing unit, the downstream data path processes a DTU provided by the command transmission unit and constructs the upstream data path for the DTU during processing of the DTU, and the processing result of the subcommand is provided to the command transmission unit through the upstream data path in response to completion of processing by the subcommand processing unit.
According to the first data processing system of the eighth aspect of the present application, there is provided a second data processing system according to the eighth aspect of the present application, the upstream data path further comprising a monitoring unit, the subcommand processing unit being coupled to the monitoring unit, the monitoring unit obtaining a subcommand processing completion indication from the subcommand processing unit and, in response to the subcommand processing completion indication, calling a callback function indicated by the data transfer unit (DTU) to provide the processing result of the subcommand to the command transmission unit through the upstream data path.
According to a first or second data processing system of an eighth aspect of the present application, there is provided a third data processing system of the eighth aspect of the present application, wherein the monitoring unit acquires the DTU according to the subcommand processing completion instruction, and acquires at least one callback function index from the DTU, wherein the callback function index indicates a callback function, the at least one callback function index being written into the DTU when one or more task processing units of the downstream data path process subcommands, and wherein the at least one callback function constitutes the upstream data path.
According to a third data processing system of an eighth aspect of the present application, there is provided a fourth data processing system of the eighth aspect of the present application, the monitoring unit calls a callback function indicated by the at least one callback function index.
According to the fourth data processing system of the eighth aspect of the present application, there is provided a fifth data processing system according to the eighth aspect of the present application, wherein the monitoring unit sequentially calls the at least one callback function corresponding to the at least one callback function index in the order in which the at least one callback function index was written.
According to the fourth data processing system of the eighth aspect of the present application, there is provided a sixth data processing system according to the eighth aspect of the present application, wherein the monitoring unit sequentially calls the at least one callback function corresponding to the at least one callback function index in the reverse of the order in which the at least one callback function index was written.
According to one of the fourth to sixth data processing systems of the eighth aspect of the present application, there is provided a seventh data processing system according to the eighth aspect of the present application, wherein, in response to the at least one callback function being called, a first callback function of the at least one callback function is executed to send the processing result of the subcommand to the command transmission unit, the first callback function being a callback function written into the DTU when the subcommand was processed by the first task processing unit.
According to the seventh data processing system of the eighth aspect of the present application, there is provided an eighth data processing system according to the eighth aspect of the present application, wherein the first callback function is executed to release a first resource.
According to the seventh or eighth data processing system of the eighth aspect of the present application, there is provided a ninth data processing system according to the eighth aspect of the present application, further comprising a resource manager, wherein, when the first callback function is executed, release of the first resource is requested from the resource manager, the first resource being an allocation that the first task processing unit requested from the resource manager when processing a subcommand, and wherein the resource manager releases the first resource in response to the request to release the first resource.
According to one of the fourth to ninth data processing systems of the eighth aspect of the present application, there is provided the tenth data processing system of the eighth aspect of the present application, wherein in response to the at least one callback function being called, a second callback function of the at least one callback function is executed to request release of a second resource from the resource manager, the second resource being an allocation that the second task processing unit requests from the resource manager when processing a subcommand, and in response to the request to release the second resource, the resource manager releases the second resource.
According to a tenth data processing system of the eighth aspect of the present application, there is provided an eleventh data processing system of the eighth aspect of the present application, the monitoring unit calls the first callback function after calling the second callback function.
According to one of the first to eleventh data processing systems of the eighth aspect of the present application, there is provided a twelfth data processing system according to the eighth aspect of the present application, wherein the monitoring unit polls the subcommand processing unit to acquire the subcommand processing completion indication.
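The polling relationship between the monitoring unit and the subcommand processing unit can be sketched as below. This is a hypothetical Python model: the completion queue, the `poll`/`finish` names, and the dictionary-shaped DTUs are invented for illustration, not taken from the application.

```python
from collections import deque

class SubcommandProcessingUnit:
    """Holds DTUs whose subcommands have finished processing."""
    def __init__(self):
        self._completed = deque()
    def finish(self, dtu):
        # Called when processing of the DTU's subcommand completes.
        self._completed.append(dtu)
    def poll(self):
        # Non-blocking poll: returns None when nothing has completed yet.
        return self._completed.popleft() if self._completed else None

def monitor_once(unit, on_complete):
    """One polling step of the monitoring unit: if a completed DTU is
    available, hand it to the completion handler (e.g., the callbacks
    forming the upstream data path)."""
    dtu = unit.poll()
    if dtu is not None:
        on_complete(dtu)
        return True
    return False
```

In a real device the monitoring unit would repeat this step, invoking the DTU's recorded callbacks each time a completed DTU is retrieved.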
According to one of the first to twelfth data processing systems of the eighth aspect of the present application, there is provided a thirteenth data processing system according to the eighth aspect of the present application, the at least one task processing unit being schedulable.
According to a ninth aspect of the present application, there is provided a first data processing system according to the ninth aspect of the present application, comprising a command transmission unit and a plurality of data paths, wherein each data path comprises a downstream data path and an upstream data path, the downstream data path comprises one or more task processing units, and the downstream data path is coupled to the command transmission unit; the command transmission unit allocates a DTU to each of one or more first subcommands to be processed, to carry the subcommand, and provides the DTU carrying the subcommand to the downstream data path of a first data path of the plurality of data paths, wherein the first subcommand is associated with a command to access a first device, and the first data path corresponds to the first device.
According to the first data processing system of the ninth aspect of the present application, there is provided a second data processing system according to the ninth aspect of the present application, the first task processing unit of the downstream data path being coupled to the command transmission unit, the command transmission unit providing the DTU carrying the first subcommand to the first task processing unit of the downstream data path of the first data path.
According to the first or second data processing system of the ninth aspect of the present application, there is provided a third data processing system according to the ninth aspect of the present application, wherein the command transmission unit allocates a DTU to each of one or more second subcommands to be processed, to carry the subcommand, and provides the DTU carrying the second subcommand to the first task processing unit of the downstream data path of a second data path, wherein the second subcommand is associated with a command to access a second device, and the second data path corresponds to the second device.
According to one of the first to third data processing systems of the ninth aspect of the present application, there is provided a fourth data processing system according to the ninth aspect of the present application, the plurality of data paths each corresponding to a device to be accessed by a command, and the command transmission unit determining, according to the device to be accessed by the command, the data path to which a subcommand associated with the command is to be provided.
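The device-to-data-path selection can be sketched as a lookup. This is an illustrative Python fragment; the mapping table, the namespace-style device names, and the command dictionary shape are all hypothetical placeholders for whatever identification the command actually carries.

```python
# Hypothetical correspondence between devices (e.g., namespaces) and
# data paths; in a real system this would be built at initialization.
PATH_BY_DEVICE = {
    "namespace-1": "data_path_1",
    "namespace-2": "data_path_2",
}

def route(command):
    """Pick the data path for a command's subcommands from the device
    the command accesses."""
    device = command["device"]
    try:
        return PATH_BY_DEVICE[device]
    except KeyError:
        raise ValueError(f"no data path for device {device!r}")
```

Every subcommand split from the same command is then provided to the downstream data path of the selected data path.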
According to one of the first to fourth data processing systems of the ninth aspect of the present application, there is provided a fifth data processing system according to the ninth aspect of the present application, the downstream data path being the downstream data path as in any of the fourth aspects above, and/or the upstream data path being the upstream data path as in any of the eighth aspects above.
According to one of the first to fifth data processing systems of the ninth aspect of the present application, there is provided a sixth data processing system according to the ninth aspect of the present application, further comprising a first subcommand processing unit coupled to the downstream data path of each of a first plurality of the data paths, wherein the last task processing unit of each such downstream data path sends a DTU to the first subcommand processing unit, and the first subcommand processing unit buffers the DTUs received from the downstream data paths of the first plurality of data paths and processes the subcommands indicated by the received DTUs.
According to the sixth data processing system of the ninth aspect of the present application, there is provided a seventh data processing system according to the ninth aspect of the present application, the first subcommand processing unit being coupled to the monitoring unit of the upstream data path of each of the first plurality of data paths, the monitoring unit obtaining processing results of subcommands from the first subcommand processing unit, acquiring the DTU carrying a processed subcommand, obtaining at least one callback function index from that DTU, and calling the callback function indicated by the at least one callback function index.
According to a sixth or seventh data processing system of the ninth aspect of the present application, there is provided an eighth data processing system of the ninth aspect of the present application, the first subcommand processing unit being coupled to one or more first NVM chips and/or the first plurality of data paths each being associated with one of a plurality of namespaces.
According to an eighth data processing system of the ninth aspect of the present application, there is provided a ninth data processing system of the ninth aspect of the present application, each of the first plurality of data paths being coupled to a first resource manager, a task processing unit of a downstream data path of each of the first plurality of data paths accessing a first resource through the first resource manager.
According to a ninth data processing system of the ninth aspect of the present application, there is provided a tenth data processing system of the ninth aspect of the present application, the first resource being associated with the one or more first NVM chips.
According to the tenth data processing system of the ninth aspect of the present application, there is provided an eleventh data processing system according to the ninth aspect of the present application, each of the first plurality of data paths being coupled to one of a plurality of second resource managers, the task processing units of the downstream data path of each of the first plurality of data paths accessing a second resource through the coupled second resource manager.
According to one of the sixth to eleventh data processing systems of the ninth aspect of the present application, there is provided a twelfth data processing system of the ninth aspect of the present application, the first subcommand processing unit is coupled to one or more first NVM chips and/or the first plurality of data paths are each associated with one of a SATA protocol compliant storage device, an open channel protocol compliant storage device, a key value (K-V) storage protocol compliant storage device, an NVMe protocol compliant storage device, and/or an NVMe protocol compliant plurality of namespaces.
According to one of the sixth to twelfth data processing systems of the ninth aspect of the present application, there is provided a thirteenth data processing system of the ninth aspect of the present application further comprising a second subcommand processing unit, a downstream data path of a third data path of the plurality of data paths being coupled to the second subcommand processing unit, the downstream data path of the third data path transmitting a DTU to the second subcommand processing unit, the second subcommand processing unit processing a third subcommand indicated by the received DTU, wherein the third subcommand is associated with a command to access a third device, and the third data path corresponds to the third device.
According to a thirteenth data processing system of the ninth aspect of the application, there is provided a fourteenth data processing system of the ninth aspect of the application, the second subcommand processing unit being coupled to one or more random access memory chips, the third data path being associated with a non-volatile memory device.
According to the fourteenth data processing system of the ninth aspect of the present application, there is provided a fifteenth data processing system according to the ninth aspect of the present application, wherein the second subcommand processing unit acquires the processing result of the third subcommand, acquires the DTU carrying the processed third subcommand, and supplies the DTU carrying the processed third subcommand to the command transmission unit through the upstream data path of the third data path.
According to one of the sixth to fifteenth data processing systems of the ninth aspect of the present application, there is provided a sixteenth data processing system according to the ninth aspect of the present application, wherein the command transmission unit supplies a subcommand associated with a management command to the first task processing unit of a fourth data path of the plurality of data paths.
According to one of the first to sixteenth data processing systems of the ninth aspect of the present application, there is provided a seventeenth data processing system of the ninth aspect of the present application, wherein each of the plurality of data paths is coupled to a first resource manager that manages a first resource, the first resource manager handles conflicts in use of multiple instances of the first resource by the plurality of data paths, and/or wherein each of the plurality of data paths is coupled to one of a plurality of second resource managers that manages a second resource, each of the second resource managers monopolizing one or more instances of the second resource.
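The two resource-manager arrangements just described, a first resource manager shared by all data paths that must arbitrate conflicting use, versus second resource managers each monopolized by one data path, can be contrasted in a small sketch. This is a hypothetical Python model; the class names, lock-based arbitration, and string-named resource instances are illustrative assumptions.

```python
import threading

class SharedResourceManager:
    """Shared by all data paths: a lock resolves conflicts when multiple
    paths allocate and release instances of the first resource."""
    def __init__(self, instances):
        self._free = list(instances)
        self._lock = threading.Lock()
    def allocate(self):
        with self._lock:
            return self._free.pop() if self._free else None
    def release(self, instance):
        with self._lock:
            self._free.append(instance)

class ExclusiveResourceManager:
    """Monopolized by a single data path: its instances of the second
    resource are never contended across paths, so no arbitration is needed."""
    def __init__(self, instances):
        self._free = list(instances)
    def allocate(self):
        return self._free.pop() if self._free else None
    def release(self, instance):
        self._free.append(instance)
```

The design choice this illustrates: resources that must be visible to every path pay an arbitration cost, while per-path resources can be handed out without cross-path synchronization.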
According to a tenth aspect of the present application, there is provided a first data processing method according to the tenth aspect of the present application, applied to a data processing system, the data processing system comprising a command transmission unit, a plurality of data paths, and at least one subcommand processing unit, each data path comprising an upstream data path and a downstream data path, the method comprising the command transmission unit allocating a DTU for each of one or more first subcommands to carry the subcommand, and providing the DTU carrying the subcommand to the downstream data path of a first data path of the plurality of data paths, wherein the first subcommand is associated with a command to access a first device, and the first data path corresponds to the first device.
According to a first data processing method of a tenth aspect of the present application, there is provided a second data processing method according to the tenth aspect of the present application, the command transmission unit providing the DTU carrying the first subcommand to the first task processing unit of the downstream data path of the first data path.
According to the first or second data processing method of the tenth aspect of the present application, there is provided a third data processing method according to the tenth aspect of the present application, wherein the command transmission unit allocates a DTU to each of one or more second subcommands to be processed, to carry the subcommand, and provides the DTU carrying the second subcommand to the first task processing unit of the downstream data path of a second data path, wherein the second subcommand is associated with a command to access a second device, and the second data path corresponds to the second device.
According to one of the first to third data processing methods of the tenth aspect of the present application, there is provided a fourth data processing method according to the tenth aspect of the present application, wherein the plurality of data paths each correspond to a device to be accessed by a command, the method further comprising the command transmission unit determining, according to the device to be accessed by the command, the data path to which a subcommand associated with the command is to be provided.
According to one of the first to fourth data processing methods of the tenth aspect of the present application, there is provided the fifth data processing method according to the tenth aspect of the present application, the downstream data path is the downstream data path as in any one of the fourth aspect described above, and/or the upstream data path is the upstream data path as in any one of the eighth aspect described above.
According to one of the first to fifth data processing methods of the tenth aspect of the present application, there is provided a sixth data processing method according to the tenth aspect of the present application, wherein the downstream data path of each of a first plurality of the data paths is coupled to a first subcommand processing unit, the method further comprising each of those downstream data paths transmitting a DTU to the first subcommand processing unit, and the first subcommand processing unit buffering the plurality of DTUs transmitted from the first plurality of data paths and processing the plurality of subcommands indicated by the plurality of DTUs.
According to the sixth data processing method of the tenth aspect of the present application, there is provided a seventh data processing method according to the tenth aspect of the present application, the data processing system further comprising a monitoring unit, the first subcommand processing unit being coupled to the monitoring unit, the method further comprising the monitoring unit obtaining the processing result of a subcommand from the at least one subcommand processing unit, acquiring the DTU carrying the processed subcommand, obtaining at least one callback function index from that DTU, and calling the callback function indicated by the at least one callback function index.
According to one of the first to seventh data processing methods of the tenth aspect of the present application, there is provided an eighth data processing method according to the tenth aspect of the present application, the data processing system further comprising a resource manager, each of the first plurality of data paths being coupled to the first resource manager, the method further comprising the task processing unit of the downstream data path of each of the first plurality of data paths accessing the first resource through the first resource manager.
According to a seventh or eighth data processing method of the tenth aspect of the present application, there is provided the ninth data processing method of the tenth aspect of the present application, further comprising merging the processing results of the one or more subcommands into the processing result of one command after the command transmission unit receives the processing results of the one or more subcommands provided by the upstream data path through which the data passes.
According to a ninth data processing method of a tenth aspect of the present application, there is provided the tenth data processing method of the tenth aspect of the present application, further comprising merging the processing results of all subcommands associated with one command into the processing result of one command after receiving the processing results of all subcommands.
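The merging step in the two methods above, collecting all subcommand results before producing one command result, can be sketched as follows. This is a hypothetical Python illustration; the `PendingCommand` name, the string-valued results, and the all-must-succeed merge rule are assumptions, not the application's specified behavior.

```python
class PendingCommand:
    """Tracks one command split into a known number of subcommands and
    merges their results once all have reported."""
    def __init__(self, subcommand_count):
        self.expected = subcommand_count
        self.results = []

    def report(self, result):
        # Called once per completed subcommand (via the upstream data path).
        self.results.append(result)
        if len(self.results) == self.expected:
            return self.merge()
        return None                 # still waiting on other subcommands

    def merge(self):
        # Illustrative merge rule: the command succeeds only if every
        # subcommand succeeded.
        return all(r == "ok" for r in self.results)
```

The command transmission unit would return the merged result to the command's sender only after the final subcommand reports.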
According to the sixth or seventh data processing method of the tenth aspect of the present application, there is provided an eleventh data processing method according to the tenth aspect of the present application, the data processing system further comprising a second subcommand processing unit, the downstream data path of a third data path of the plurality of data paths being coupled to the second subcommand processing unit, the method further comprising the downstream data path of the third data path sending a DTU to the second subcommand processing unit, and the second subcommand processing unit processing a third subcommand indicated by the received DTU, wherein the third subcommand is associated with a command to access a third device, and the third data path corresponds to the third device.
According to the eleventh data processing method of the tenth aspect of the present application, there is provided a twelfth data processing method according to the tenth aspect of the present application, the second subcommand processing unit being coupled to one or more random access memory chips, the third data path being associated with a non-volatile memory device.
According to the twelfth data processing method of the tenth aspect of the present application, there is provided a thirteenth data processing method according to the tenth aspect of the present application, further comprising the second subcommand processing unit acquiring the processing result of the third subcommand and acquiring the DTU carrying the processed third subcommand, the DTU carrying the processed third subcommand being supplied to the command transmission unit through the upstream data path of the third data path.
According to one of the first to thirteenth data processing methods of the tenth aspect of the present application, there is provided the fourteenth data processing method according to the tenth aspect of the present application, wherein each of the plurality of data paths is coupled to a first resource manager that manages a first resource, the first resource manager handles conflicts in use of a plurality of instances of the first resource by the plurality of data paths, and/or each of the plurality of data paths is coupled to one of a plurality of second resource managers that manages a second resource, each of the second resource managers monopolizing one or more instances of the second resource.
According to an eleventh aspect of the present application, there is provided a first storage device according to the eleventh aspect of the present application, comprising at least one memory chip and at least one data path, each data path corresponding to one or more memory chips, wherein, in response to receiving a command, the storage device accesses, through one of the at least one data path, the memory chip corresponding to that data path and performs the operation indicated by the command, the operation comprising a read operation, a write operation, or an erase operation.
According to the first storage device of the eleventh aspect of the present application, there is provided a second storage device according to the eleventh aspect of the present application, the at least one data path comprising at least one of a first class of paths, a second class of paths, and a third class of paths, wherein the first, second, and third classes of paths have at least one task processing unit that is the same as or different from one another.
According to a second storage device of the eleventh aspect of the present application, there is provided a third storage device of the eleventh aspect of the present application, the at least one data path further comprising a fourth class of paths, the fourth class of paths handling management commands.
According to a second storage device of an eleventh aspect of the present application, there is provided a fourth storage device of the eleventh aspect of the present application, each of the data paths including a first task processing unit, a second task processing unit, and a third task processing unit, wherein the first task processing unit processes a cache task, and the second task processing unit processes an address conversion task.
According to the second or fourth storage device of the eleventh aspect of the present application, there is provided a fifth storage device according to the eleventh aspect of the present application, the first and second classes of paths comprising a third task processing unit that processes data assembly tasks.
According to the second, fourth, or fifth storage device of the eleventh aspect of the present application, there is provided a sixth storage device according to the eleventh aspect of the present application, the first class of paths comprising a fourth task processing unit that processes garbage collection tasks.
According to one of the first to sixth storage devices of the eleventh aspect of the present application, there is provided a seventh storage device according to the eleventh aspect of the present application, wherein the memory chip corresponding to the third class of paths is a RAM chip, and the memory chip corresponding to the first or second class of paths is an NVM chip.
According to one of the first to seventh storage devices of the eleventh aspect of the present application, there is provided an eighth storage device of the eleventh aspect of the present application, further comprising a command transmission unit that, in response to receiving a command, splits the command into at least one subcommand and determines, according to identification information carried by the command, the type of data path to which the subcommands split from the command are to be provided.
According to an eighth storage device of an eleventh aspect of the present application, there is provided the ninth storage device of the eleventh aspect of the present application, the command transmitting unit transmits the at least one subcommand to a first data path to access a first memory chip through the first data path, wherein the identification information indicates that an object of an operation is the first memory chip, and the first data path is a data path corresponding to the first memory chip.
According to an eighth or ninth storage device of the eleventh aspect of the present application, there is provided the tenth storage device of the eleventh aspect of the present application, the command transmission unit determines the first memory chip to be accessed according to the identification information, and determines the first data path according to a correspondence relationship between memory chips and data paths.
According to the eleventh aspect of the present application, there is provided an eleventh storage device according to the eleventh aspect of the present application, the storage device comprising a command distribution unit and a plurality of command transmission units each corresponding to one data path; in response to receiving a command, the command distribution unit determines a first command transmission unit corresponding to a first data path according to identification information carried by the command, the identification information indicating that the object of the operation is a first memory chip and the first data path being the data path corresponding to the first memory chip, splits the command into at least one subcommand, and transmits the at least one subcommand to the first data path to access the first memory chip through the first data path.
According to a twelfth aspect of the present application, there is provided a first data path according to the twelfth aspect of the present application, comprising a channel, at least one task processing unit and at least one resource manager, the at least one resource manager managing different types of resources; when processing a subcommand, the at least one task processing unit requests allocation of resources from the at least one resource manager, and in response to obtaining the resources, allocates the obtained resources to a DTU, the DTU being obtained from the channel by the at least one task processing unit and carrying the subcommand; in response to completion of use of the resources allocated to the DTU, the at least one task processing unit releases the resources to the at least one resource manager.
According to a twelfth aspect of the present application, there is provided the second data path according to the twelfth aspect of the present application, wherein the at least one task processing unit determines whether to request resources from at least one resource manager according to the type of the subcommand when the subcommand is processed, and does not request resources from the at least one resource manager when the determination result is negative.
According to a first or second data path of a twelfth aspect of the present application, there is provided a third data path of the twelfth aspect of the present application, wherein, when the at least one task processing unit allocates a plurality of resources to the DTU, the at least one task processing unit releases the resources to the at least one resource manager after all of the plurality of resources have been used, and the at least one resource manager reclaims the plurality of resources in response to the resources being released.
According to a third data path of a twelfth aspect of the present application, there is provided a fourth data path of the twelfth aspect of the present application, wherein each of the at least one task processing unit releases the at least one resource that it itself allocated to the DTU, or a first task processing unit of the at least one task processing unit releases at least one resource allocated to the DTU by a second task processing unit of the at least one task processing unit.
According to one of the first to fourth data paths of the twelfth aspect of the present application, there is provided a fifth data path according to the twelfth aspect of the present application, each task processing unit allocates one type of resource to the DTU according to the type of resource, and each task processing unit releases at least one resource of its corresponding resource type.
According to a twelfth aspect of the present application, there is provided a sixth data path according to the twelfth aspect of the present application, the data path comprising a resource manager that manages at least one type of resource, the one resource manager allocating at least one resource and providing the at least one resource to at least one task processing unit in response to a resource allocation request, wherein, when the number of the at least one resource is greater than 1, the resources are of different types.
According to a first data path of a twelfth aspect of the present application, there is provided a seventh data path according to the twelfth aspect of the present application, the data path comprising a plurality of resource managers managing a plurality of types of resources, the plurality of resource managers allocating a plurality of different types of resources and providing the plurality of different types of resources to a plurality of task processing units in response to a resource allocation request.
According to one of the first to seventh data paths of the twelfth aspect of the present application, there is provided an eighth data path of the twelfth aspect of the present application, when the at least one task processing unit allocates the obtained resources to the DTU, the at least one task processing unit further writes a callback function index in the DTU, the callback function index pointing to a callback function in the at least one task processing unit.
According to an eighth data path of the twelfth aspect of the present application, there is provided a ninth data path according to the twelfth aspect of the present application, the callback function releasing resources to at least one resource manager when called.
According to one of the first to ninth data paths of the twelfth aspect of the present application, there is provided the tenth data path of the twelfth aspect of the present application, wherein the at least one task processing unit sends a resource release request to the at least one resource manager during downlink data transmission or uplink data transmission.
According to one of the first to tenth data paths of the twelfth aspect of the present application, there is provided an eleventh data path of the twelfth aspect of the present application, wherein, after acquiring the DTU from the channel, the at least one task processing unit judges whether the use of the resources allocated to the DTU is completed, and when the judgment result is yes, releases the resources to the at least one resource manager.
According to an eleventh data path of the twelfth aspect of the present application, there is provided the twelfth data path of the twelfth aspect of the present application, wherein, if it is judged that the use of the plurality of resources allocated to the DTU is completed, the at least one task processing unit releases the plurality of resources sequentially, in the forward or reverse order of the order in which the plurality of resources were allocated.
According to one of the first to twelfth data paths of the twelfth aspect of the present application, there is provided a thirteenth data path according to the twelfth aspect of the present application, the channel comprising a monitoring unit which monitors the addition of a DTU to, or the removal of a DTU from, the channel in which it is located, and in response notifies one or more resource managers.
According to a thirteenth data path of a twelfth aspect of the present application, there is provided a fourteenth data path of the twelfth aspect of the present application, wherein the monitoring unit records a monitoring function with which one or more resource managers are registered, and, in response to a DTU being added to or taken out of the channel, the monitoring unit calls the registered monitoring function to notify the one or more resource managers.
According to a thirteenth or fourteenth data path of the twelfth aspect of the present application, there is provided a fifteenth data path according to the twelfth aspect of the present application, wherein the monitoring unit notifies the resource manager to request allocation of resources for said DTU, and/or the monitoring unit notifies the resource manager that the state of the resources it manages has changed.
According to a thirteenth aspect of the present application, there is provided a first resource management method according to the thirteenth aspect of the present application, applied to a task processing unit, the method comprising: obtaining a DTU from a channel, the DTU carrying a subcommand; requesting allocation of resources from at least one resource manager when the subcommand is processed; allocating the obtained resources to the DTU in response to obtaining the resources from the resource manager; and releasing the resources to the at least one resource manager in response to completion of use of the resources allocated to the DTU.
According to a first resource management method of a thirteenth aspect of the present application, there is provided a second resource management method of the thirteenth aspect of the present application, the method further comprising, before requesting allocation of resources to at least one resource manager, judging whether to request allocation of resources according to the type of the subcommand, and not requesting allocation of resources when the judgment result is negative.
According to a thirteenth aspect of the present application, there is provided the third resource management method according to the thirteenth aspect of the present application, wherein requesting allocation of resources from at least one resource manager when the subcommand is processed includes: requesting allocation from one resource manager to cause the one resource manager to allocate one resource; or requesting allocation from one resource manager to cause the one resource manager to allocate a plurality of resources; or requesting allocation from a plurality of resource managers to cause the plurality of resource managers to allocate a plurality of resources, wherein the allocating of the plurality of resources by the plurality of resource managers includes each of the plurality of resource managers allocating one resource, each of the plurality of resources being different in type.
According to one of the first to third resource management methods of the thirteenth aspect of the present application, there is provided the fourth resource management method of the thirteenth aspect of the present application, the allocating the obtained resources to the DTU in response to obtaining the resources from the resource manager, including allocating one resource to the DTU when the one resource is obtained from the resource manager, or allocating a plurality of resources to the DTU in the order of the requested allocation when the plurality of resources are obtained from the resource manager.
According to a fourth resource management method of a thirteenth aspect of the present application, there is provided the fifth resource management method of the thirteenth aspect of the present application, further comprising writing a callback function index in the DTU when allocating a resource to the DTU, wherein, when a plurality of resources are allocated to the DTU, a plurality of callback function indexes are written in the DTU, the indexes of the plurality of callback functions being written in the forward or reverse order of the order in which the plurality of resources were allocated.
According to one of the first to fifth resource management methods of the thirteenth aspect of the present application, there is provided the sixth resource management method according to the thirteenth aspect of the present application, the method further comprising, before releasing the resources to the at least one resource manager in response to completion of use of the resources allocated to the DTU, judging whether the use of the resources allocated to the DTU is completed, and if the judgment result is yes, determining that the use of the resources allocated to the DTU is completed.
According to a fifth or sixth resource management method of the thirteenth aspect of the present application, there is provided a seventh resource management method of the thirteenth aspect of the present application, wherein releasing resources to at least one resource manager in response to completion of use of the resources allocated to a DTU comprises calling at least one callback function through at least one callback function index in the DTU to release at least one resource, wherein, when a plurality of callback functions are called, the plurality of callback functions are called sequentially in the forward or reverse order of the order in which the plurality of resources were allocated.
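As an illustrative sketch of the callback-index mechanism of the fifth to seventh resource management methods (in Python, with hypothetical names such as `register_release` and `callback_table`; this is a model under stated assumptions, not the claimed firmware): indexes are written into the DTU in allocation order, and on completion the callbacks are invoked, here in reverse order, to release the resources.

```python
# Hypothetical sketch: a global callback table, and a DTU that records
# callback function indexes in the order the resources were allocated.
callback_table = []   # each task processing unit registers its release callbacks here
released = []         # records the order in which resources are released

def register_release(resource_name):
    """Register a release callback; return its index (to be written in the DTU)."""
    callback_table.append(lambda: released.append(resource_name))
    return len(callback_table) - 1

# Allocate two resources; write their callback indexes into the DTU in allocation order.
dtu = {"subcommand": "write", "callback_indexes": []}
for resource in ("cache_buffer", "physical_page"):
    dtu["callback_indexes"].append(register_release(resource))

# On completion of use, call the callbacks in reverse allocation order to release.
for index in reversed(dtu["callback_indexes"]):
    callback_table[index]()
```

Calling in reverse allocation order mirrors the common convention that the last-acquired resource is released first; the claims above also permit the forward order.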
According to a fourteenth aspect of the present application, there is provided the first information processing apparatus according to the fourteenth aspect of the present application, comprising a memory, a processor, and a program stored on the memory and executable on the processor, the processor implementing any one of the resource management methods of the thirteenth aspect described above when executing the program.
According to a fifteenth aspect of the present application, there is provided a first resource management method according to the fifteenth aspect of the present application, for use in a resource manager, the method comprising allocating resources for at least one task processing unit in response to a request by the at least one task processing unit, and reclaiming the allocated resources in response to a release request by the at least one task processing unit.
According to a first resource management method of a fifteenth aspect of the present application, there is provided a second resource management method of the fifteenth aspect of the present application, wherein allocating resources for at least one task processing unit in response to a request from the at least one task processing unit includes: allocating one resource for one task processing unit in response to a request from the one task processing unit; or allocating a plurality of resources of different types for one task processing unit in response to a request from the one task processing unit; or allocating a plurality of resources for a plurality of task processing units in response to requests from the plurality of task processing units, the plurality of resources being of the same or different types, and each of the plurality of task processing units obtaining at least one resource.
According to a fifteenth aspect of the present application, there is provided the third resource management method according to the fifteenth aspect of the present application, wherein reclaiming the allocated resources in response to the at least one task processing unit releasing the resources includes: reclaiming the allocated one resource in response to one task processing unit releasing the one resource; or reclaiming the allocated plurality of resources in response to the one task processing unit releasing the plurality of resources, the plurality of resources being different in type; or reclaiming the allocated plurality of resources in response to a plurality of task processing units releasing the plurality of resources, the plurality of resources being the same or different in type.
According to a sixteenth aspect of the present application, there is provided the first information processing apparatus according to the sixteenth aspect of the present application, comprising a memory, a processor and a program stored on the memory and executable on the processor, the processor implementing any one of the resource management methods of the fifteenth aspect described above when executing the program.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments described in the present application, and a person of ordinary skill in the art may derive other drawings from these drawings.
FIG. 1A is a schematic diagram of task scheduling in the prior art;
FIG. 1B is a block diagram of a prior art task processing system;
FIG. 2A is a block diagram of a task processing system provided by an embodiment of the present application;
FIG. 2B is a block diagram of a task processing unit according to an embodiment of the present application;
FIG. 2C is a schematic flow chart of processing a DTU according to an embodiment of the present application;
FIG. 2D is a block diagram of a channel provided by an embodiment of the present application;
FIG. 3A is a schematic diagram of a downlink data path according to an embodiment of the present application;
FIG. 3B is a schematic flow chart of constructing a downlink path according to an embodiment of the present application;
FIG. 3C is a schematic diagram of another downstream data path according to an embodiment of the present application;
FIG. 4A is a schematic diagram of another downstream data path according to an embodiment of the present application;
FIG. 4B is a schematic diagram of another downstream data path according to an embodiment of the present application;
FIG. 5A is a schematic diagram of an upstream data path according to an embodiment of the present application;
FIG. 5B is a schematic diagram of another upstream data path according to an embodiment of the present application;
FIG. 5C is a schematic flow chart of uplink path construction according to an embodiment of the present application;
FIG. 5D is a schematic diagram of a DTU according to an embodiment of the present application;
FIG. 6A is a schematic diagram of resource management according to an embodiment of the present application;
FIG. 6B is a schematic diagram of another resource management provided by an embodiment of the present application;
FIG. 6C is a schematic diagram of another resource management provided by an embodiment of the present application;
FIG. 7 is a block diagram of a memory device according to an embodiment of the present application;
FIG. 8A is a block diagram of yet another memory device provided by an embodiment of the present application;
FIG. 8B is a block diagram of still another storage device according to an embodiment of the present application.
Detailed Description
The following describes the embodiments of the present application clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are some, but not all, embodiments of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
FIG. 2A illustrates a block diagram of a task processing system for a storage device in accordance with an embodiment of the present application.
The CPU of the control part of the storage device runs software (also called firmware). The running software comprises a scheduler, a task processing unit, a channel and a resource manager. The scheduler schedules the operation of the task processing units. A task processing unit is a thread, process, task, or other software unit of the operating system that may be scheduled by a scheduler.
The task processing units, channels and resource managers each include one or more instances; 4 task processing units (210, 212, 214 and 216), 4 channels (220, 222, 224 and 226) and 2 resource managers (240 and 245) are shown in FIG. 2A.
The channels are used for communication between task processing units. The message unit carried by a channel is called a Data Transfer Unit (DTU). By way of example, DTUs are in one-to-one correspondence with subcommands: each subcommand is assigned a DTU to carry the subcommand's context and to track the subcommand's processing and results. A DTU is an instance of a data structure such as a packet. For example, each subcommand accesses a memory space of the same size. The channel includes, for example, a DTU list to accommodate one or more DTUs.
The channel also includes functions that handle DTU operations, such as a Push function that adds DTUs to the DTU list and/or a Pop function that retrieves DTUs from the DTU list. Optionally, custom callback functions may also be registered with the channel. Callback functions registered with the channel are used, for example, to invoke services provided by the resource manager.
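The DTU, the channel's DTU list, and the Push/Pop functions described above can be sketched as follows. This is an illustrative Python model, not the firmware implementation; the names `DTU`, `Channel`, `push`, `pop` and `register_callback` are assumptions chosen for the sketch.

```python
from collections import deque

class DTU:
    """Data Transfer Unit: carries one subcommand plus its context and result."""
    def __init__(self, subcommand):
        self.subcommand = subcommand
        self.context = {}

class Channel:
    """A channel: a DTU list plus the functions that operate on it."""
    def __init__(self):
        self._dtus = deque()
        self._callbacks = []   # optional custom callbacks registered with the channel

    def register_callback(self, fn):
        self._callbacks.append(fn)

    def push(self, dtu):
        """Add a DTU to the DTU list and notify registered callbacks."""
        self._dtus.append(dtu)
        for fn in self._callbacks:
            fn(dtu)

    def pop(self):
        """Take the oldest DTU from the DTU list; None if the list is empty."""
        return self._dtus.popleft() if self._dtus else None

# Usage: one subcommand is carried through the channel by one DTU.
channel = Channel()
seen = []
channel.register_callback(lambda dtu: seen.append(dtu.subcommand))
channel.push(DTU("read-subcommand"))
dtu = channel.pop()
```

The registered callback here is a stand-in for the resource-manager services the text mentions; it fires on each Push, which is also the hook the monitoring unit described later relies on.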
According to an embodiment of the application, a channel instance may be bound to a task processing unit. The channel receives DTUs from one or more of the bound task processing units and provides DTUs to only one of the bound task processing units.
According to an embodiment of the application, only the task processing unit is enabled to use the channel, and the scheduler cannot call or schedule the channel instance. The index of the channel instance to which it is bound is recorded in the task processing unit so that the task processing unit can operate on the channel instance by means of, for example, a Push/Pop function.
Channel instances are bound between task processing unit instances, so that a task processing unit acquires a DTU from a bound channel, processes the subcommand carried by the DTU, and adds the processed subcommand, via the DTU, to another channel. The task processing unit instances communicate only through channels, so that they are decoupled from one another and execute concurrently and asynchronously. The execution and current state of one task processing unit instance do not affect (e.g., block) the execution of other task processing unit instances. If the control unit includes a plurality of CPUs, the CPUs can process the task processing unit instances in parallel.
The resource manager manages the use, e.g., allocation, release and/or reclamation, of specified resources. The resource manager is invoked only by the task processing unit and/or the channel, and is not invoked or scheduled by the scheduler. According to one embodiment, resources shared by multiple task processing units in the task processing system, e.g., an FTL table (recording the mapping from logical addresses of the storage device to physical addresses of the NVM chips) or a cache, are managed by a resource manager. A task processing unit accesses a specified resource through the specified resource manager; if accesses to the specified resource by multiple task processing unit instances conflict, the resource manager resolves the conflict by locking, queuing, or the like.
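A minimal sketch of such a resource manager, resolving concurrent access by locking (illustrative Python with hypothetical names; a real firmware might queue requests instead of blocking on a lock):

```python
import threading

class ResourceManager:
    """Manages allocation, release and reclamation of one type of resource
    (e.g. cache buffers) shared by multiple task processing units."""
    def __init__(self, resources):
        self._free = list(resources)
        self._lock = threading.Lock()   # resolves concurrent-access conflicts by locking

    def allocate(self):
        """Hand out a free resource, or None if the pool is exhausted."""
        with self._lock:
            return self._free.pop() if self._free else None

    def release(self, resource):
        """Return a resource to the free pool (reclamation)."""
        with self._lock:
            self._free.append(resource)

# Usage: a pool of two buffers shared between task processing units.
manager = ResourceManager(["buffer0", "buffer1"])
first = manager.allocate()
second = manager.allocate()
exhausted = manager.allocate()   # the pool is empty at this point
manager.release(first)
```

Returning `None` on exhaustion stands in for whatever back-pressure mechanism the firmware uses; the essential point is that allocation and release are serialized through one arbiter.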
FIG. 2B illustrates a block diagram of a task processing unit according to an embodiment of the application. FIG. 2C illustrates a flow chart implemented by a task processing unit according to an embodiment of the application.
Referring to fig. 2B, the task processing unit includes an inbound interface, an outbound interface, a DTU processing module, and optionally one or more callback functions.
The inbound interface obtains DTUs from a channel bound to the task processing unit, for example by calling the Pop function of the bound channel to obtain a DTU from its DTU list. The outbound interface adds DTUs to a channel bound to the task processing unit, for example by calling the Push function of the bound channel to add a DTU to its DTU list. The DTU processing module extracts the subcommand from the DTU acquired via the inbound interface, processes the subcommand, carries the processed subcommand in the DTU, and adds the DTU to the channel through the outbound interface. The DTU processing module optionally adds the indexes of one or more callback functions to the DTU to construct the upstream data path. Embodiments of constructing the upstream data path are described in detail later.
Optionally, the task processing unit comprises two or more inbound interfaces and/or two or more outbound interfaces. Each inbound interface is coupled to one of the channels. Each outbound interface is coupled to one of the channels.
According to an embodiment of the application, the task processing system provides templates for, e.g., the task processing unit, the channel, and the resource manager. A user such as a programmer builds a task processing unit by copying the template and adding the required code to the copy. Taking the task processing unit as an example, its reusable parts (such as the inbound interface, the outbound interface, and the DTU processing module) are provided by the template; what needs to be set for the task processing unit are the channel instance coupled to the inbound interface, the channel instance coupled to the outbound interface, and the function by which the task processing unit processes the subcommand (such as allocating a buffer for the subcommand, or querying the physical address to be accessed by the subcommand). Therefore, when constructing a task processing unit, only the specific function of the task processing unit instance needs to be implemented, without concern for how to obtain the subcommands to be processed, how to process subcommands concurrently, and so on.
Fig. 2C illustrates a flow chart of the DTU processing unit processing the DTU.
Referring to FIG. 2C (and also to FIG. 2B), the DTU processing unit obtains DTUs from the channels to which each inbound interface of the task processing unit in which it is located is coupled (260). The DTU processing unit polls each inbound interface for DTUs to be processed. Optionally, the plurality of inbound interfaces of the task processing unit each have a priority, and the DTU processing unit obtains DTUs from each inbound interface according to the priority.
The acquired DTU carries the subcommand. The DTU processing unit obtains the subcommand from the DTU and processes the subcommand according to the content of the subcommand (262). Optionally, the DTU processing unit of each task processing unit performs a specific stage of processing of the subcommand. The processing of sub-commands by the plurality of task processing units may be the same or different. For example, two task processing units process the same phase of the subcommand, so that the two task processing units process the two subcommands in parallel. As yet another example, two task processing units process different phases of a subcommand, such that the two task processing units process the same subcommand sequentially.
The subcommand processed by the DTU processing unit is still carried in the DTU. The DTU processing unit issues (264) the DTU carrying its processed subcommand through the outbound interface. The channel to which the outbound interface is coupled in turn couples other task processing units. For example, the DTU processing unit selects one of the outbound interfaces to send the DTU according to the processing that the subcommand carried by the DTU is to undergo next. Still alternatively, the plurality of outbound interfaces of the task processing unit each have a priority, and the DTU processing unit sends the DTU through each of the outbound interfaces according to the priority.
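The loop of FIG. 2C — obtain a DTU from a prioritized inbound interface (260), process its subcommand (262), and send the DTU through an outbound interface (264) — might be sketched as follows. This is illustrative Python; the class and method names (`TaskProcessingUnit`, `step`) are assumptions, and the stage-specific processing is reduced to a supplied function.

```python
from collections import deque

class DTU:
    def __init__(self, subcommand):
        self.subcommand = subcommand

class Channel:
    def __init__(self):
        self._dtus = deque()
    def push(self, dtu):
        self._dtus.append(dtu)
    def pop(self):
        return self._dtus.popleft() if self._dtus else None

class TaskProcessingUnit:
    """Polls inbound channels in priority order, processes one subcommand
    per step, and pushes the DTU to the outbound channel."""
    def __init__(self, inbound_channels, outbound_channel, process):
        self.inbound = inbound_channels   # ordered highest priority first
        self.outbound = outbound_channel
        self.process = process            # stage-specific subcommand handler

    def step(self):
        for channel in self.inbound:      # poll inbound interfaces by priority
            dtu = channel.pop()
            if dtu is not None:
                dtu.subcommand = self.process(dtu.subcommand)
                self.outbound.push(dtu)
                return True
        return False                      # nothing to do this round

# Usage: a unit with a high- and a low-priority inbound channel.
high, low, out = Channel(), Channel(), Channel()
unit = TaskProcessingUnit([high, low], out, lambda s: s + ":translated")
low.push(DTU("cmd-a"))
high.push(DTU("cmd-b"))
unit.step()   # the higher-priority channel is served first
unit.step()
```

A scheduler would repeatedly invoke `step` on each unit instance; because each step touches only the unit's own channels, units never block one another.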
Fig. 2D illustrates a block diagram of a channel according to an embodiment of the application.
The channel includes a DTU list and a plurality of functions (270, 272, 274) that operate on the DTU list. The DTU list accommodates a plurality of DTUs. A task processing unit coupled to the channel invokes (via the outbound interface) a Push function (270) for adding DTUs to the DTU list, and invokes (via the inbound interface) a Pop function (274) for retrieving DTUs from the DTU list.
From the time a DTU is added to the DTU list, a long time may elapse until the DTU is fetched from the DTU list. For example, if the DTU list is implemented as a queue, a DTU is added to the tail of the queue and is not removed until it becomes the head of the queue. Yet some subcommands represented by DTUs are intended to be processed with low latency. Thus, optionally, the channel also includes a Push function (272), different from the Push function (270). A task processing unit coupled to the channel invokes the Push function (272) to provide a DTU to the channel; in response, the direct forwarding unit of the channel retrieves the DTU from the Push function (272) and calls the function indicated by the destination index, providing the retrieved DTU to that function to complete delivery of the DTU. Thus, a DTU provided to the channel by calling the Push function (272) is delivered to, and immediately processed by, the function indicated by the destination index, reducing the time the DTU stays in the channel. The function indicated by the destination index is a function of the task processing unit that receives DTUs from the channel. The Push function (272), the direct forwarding unit and the destination index thus form the channel's fast DTU-processing path.
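The two delivery paths — queued Push (270) versus direct-forwarding Push (272) — can be sketched as below (illustrative Python; `push_direct` and `destination` are hypothetical names standing in for the Push function (272) and the function indicated by the destination index):

```python
from collections import deque

class FastChannel:
    """A channel with a queued Push (270) and a direct-forwarding Push (272)
    that hands the DTU straight to the function named by the destination index."""
    def __init__(self, destination):
        self.dtus = deque()             # the ordinary DTU list
        self.destination = destination  # function of the consuming task processing unit

    def push(self, dtu):
        """Normal path: the DTU waits in the DTU list until it is popped."""
        self.dtus.append(dtu)

    def push_direct(self, dtu):
        """Fast path: bypass the DTU list; the destination processes it at once."""
        self.destination(dtu)

# Usage: urgent DTUs skip the queue entirely.
handled = []
channel = FastChannel(destination=handled.append)
channel.push("queued-dtu")
channel.push_direct("urgent-dtu")
```

The design trade-off is latency versus decoupling: the direct path runs the consumer's function in the producer's context, so it suits short, low-latency handlers, while the queued path preserves full asynchrony.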
The DTU list of a channel, the functions that operate on the DTU list, and the direct forwarding unit are its reusable parts, provided by the template. When a channel is instantiated, storage space is provided for the DTU list to accommodate DTUs. Optionally, a destination index is also set to indicate to the direct forwarding unit of the channel the function that receives the DTU.
Optionally, the channel further comprises a monitoring unit. The monitoring unit monitors the operation (e.g., adding and/or removing) of the DTU list and/or monitors the direct forwarding operation to the DTU.
According to an alternative embodiment, one or more resource managers monitor, via the monitoring unit, the addition of DTUs to the DTU list by the Push function (270), so that the resource manager learns in time that resources associated with a DTU are being used, or allocates to the DTU the resources it requires.
By way of example, the resource manager registers the monitoring function with the monitoring unit. In response to the Push function (270) being invoked, the registered one or more monitor functions are invoked. The DTU added to the channel by the Push function (270) or the parameters it carries are taken as parameters of the monitoring function.
Still by way of example, a DTU added to a channel and to be processed by another task processing unit requires some kind of resource, and allocating that resource takes some time. The monitoring function requests such resources from the corresponding resource manager in response to the DTU being added to the channel, thereby appropriately moving the resource request forward in time. When a task processing unit later acquires the DTU from the channel, such resources have already been allocated to the DTU, reducing the delay in processing the DTU (hiding the resource allocation time).
As yet another example, a resource manager may need to monitor the status of a resource. For example, a storage medium resource storing data may fail due to an update; besides the "free" and "used" states, it may thus be in a "failed" state. The resource manager maintains the state of the storage medium: the transition from the "used" state to the "free" state is triggered by a resource release operation, and the transition from the "free" state to the "used" state is triggered by a resource allocation operation. According to an embodiment of the application, the resource manager learns of the transition from the "used" state to the "failed" state by monitoring the operation of the Push function (270) of the channel or the operation of the DTU list. For example, the monitoring function obtains the logical address and the physical address corresponding to a DTU carrying a write subcommand, and records that the state of the storage medium corresponding to the physical address is "failed". The task processing unit or other parties therefore do not have to notify the resource manager of such a state change separately.
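The monitoring mechanism of the last three paragraphs can be sketched as follows. This is a minimal Python illustration; `MonitoredChannel`, `register_monitor`, and `on_write_dtu` are invented names, and the DTU is represented as a plain dictionary:

```python
class MonitoredChannel:
    """Channel sketch with a monitoring unit: monitoring functions
    registered by resource managers are invoked whenever the Push
    function (270) adds a DTU to the DTU list."""
    def __init__(self):
        self.dtu_list = []
        self.monitors = []  # monitoring functions registered with the monitoring unit

    def register_monitor(self, fn):
        self.monitors.append(fn)

    def push(self, dtu):
        self.dtu_list.append(dtu)
        for fn in self.monitors:
            fn(dtu)            # the added DTU is the monitoring function's parameter

# A resource manager learns of the "used" -> "failed" transition by
# monitoring pushes of DTUs that carry write subcommands.
media_state = {}

def on_write_dtu(dtu):
    if dtu.get("subcmd") == "write":
        # the previously used physical address now holds stale data
        media_state[dtu["old_physical_addr"]] = "failed"

ch = MonitoredChannel()
ch.register_monitor(on_write_dtu)
ch.push({"subcmd": "write", "logical_addr": 7, "old_physical_addr": 0x100})
assert media_state[0x100] == "failed"
```

No separate notification is sent to the resource manager; it observes the push itself, as the text describes.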
Fig. 3A shows a schematic diagram of a downstream data path according to an embodiment of the present application.
According to an embodiment of the present application, the data path includes a downstream data path and an upstream data path. The downstream data path is used for processing the subcommands of IO commands, and the upstream data path is used for collecting and delivering the processing results of the subcommands.
The downstream data path includes task processing units (310, 312, 314, 316) and channels (320, 322, 324) that connect the task processing units and pass DTUs between them.
A channel connects two or more task processing units and transmits DTUs unidirectionally. The task processing units connected by a channel are thus in a producer-consumer relationship with respect to DTUs. Referring to FIG. 3A, task processing unit 310 provides DTUs to task processing unit 314 via channel 320; task processing unit 310 is the producer, and task processing unit 314 is the consumer of the DTUs produced by task processing unit 310. Task processing unit 312 also provides DTUs to task processing unit 314 via channel 320. According to embodiments of the present application, a channel may be coupled to one or more task processing units that are DTU producers (e.g., task processing units 310 and 312 relative to channel 320), but to only one task processing unit that is a DTU consumer (e.g., task processing unit 314 relative to channel 320). A task processing unit can send DTUs to and receive DTUs from multiple channels. For example, task processing unit 312 sends DTUs to two channels (320 and 324), and task processing unit 316 receives DTUs from two channels (322 and 324).
In alternative embodiments, in response to a channel having a DTU to be processed, the scheduler schedules execution of the task processing unit coupled to the channel as its consumer, or places that task processing unit in a schedulable state. After the task processing unit is scheduled for execution, it obtains the DTU from the channel to which it is coupled and processes it.
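This scheduling rule can be sketched as follows. The names `Scheduler`, `bind`, and `run_once` are invented for illustration, and plain Python lists stand in for channels:

```python
class Scheduler:
    """Scheduler sketch: a consumer is runnable only while its channel
    holds DTUs to be processed."""
    def __init__(self):
        self.bindings = []  # (channel, consumer) pairs

    def bind(self, channel, consumer):
        self.bindings.append((channel, consumer))

    def run_once(self):
        for channel, consumer in self.bindings:
            while channel:                  # pending DTUs: consumer is schedulable
                consumer(channel.pop(0))    # consumer fetches the DTU and processes it

processed = []
queue = [{"subcmd": "read"}, {"subcmd": "write"}]  # a channel with two pending DTUs
sched = Scheduler()
sched.bind(queue, processed.append)
sched.run_once()
assert processed == [{"subcmd": "read"}, {"subcmd": "write"}]
```

A consumer whose channel is empty is simply never invoked, which corresponds to the task processing unit not being in a schedulable state.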
The downstream data path may be constructed as follows: one or more task processing units are created, and one or more channels are created. The created channels are bound to the created task processing units, and each task processing unit is instructed to acquire DTUs from a channel or deliver DTUs through a channel.
The downstream data path is constructed at the time of initialization of the task processing system. Optionally, during operation of the task processing system, a downstream data path is constructed, or the constructed downstream data path is altered.
Fig. 3B shows a flow chart for constructing a downstream data path.
One or more task processing units (340) and one or more channels (350) of the downstream data path are created for constructing the downstream data path. By way of example, to construct the downstream data path shown in FIG. 3A, task processing units (310, 312, 314, and 316) and channels (320, 322, and 324) are created.
The created channels are bound to the inbound/outbound interfaces of the task processing units according to the specified coupling relationships (360). Still by way of example, to construct the downstream data path illustrated in FIG. 3A, channel 320 is bound to the outbound interface of task processing unit 310 and the outbound interface of task processing unit 312. Channel 320 is also bound to the inbound interface of task processing unit 314. Channel 322 is bound to the outbound interface of task processing unit 314 and the inbound interface of task processing unit 316. Channel 324 is bound to the outbound interface of task processing unit 312 and the inbound interface of task processing unit 316.
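The creation steps (340, 350) and the binding step (360) above can be sketched as follows. `TaskUnit` and `bind` are hypothetical names, and Python lists stand in for channels:

```python
class TaskUnit:
    """Task processing unit sketch with inbound/outbound interfaces."""
    def __init__(self, name):
        self.name, self.inbound, self.outbound = name, [], []

def bind(channel, producers, consumer):
    """Step 360: bind a channel to the outbound interfaces of its
    producers and to the inbound interface of its single consumer."""
    for p in producers:
        p.outbound.append(channel)
    consumer.inbound.append(channel)

# Steps 340/350: create the task processing units and channels of Fig. 3A.
u310, u312, u314, u316 = (TaskUnit(n) for n in ("310", "312", "314", "316"))
ch320, ch322, ch324 = [], [], []

bind(ch320, [u310, u312], u314)   # two producers, one consumer
bind(ch322, [u314], u316)
bind(ch324, [u312], u316)

assert u312.outbound[0] is ch320 and u312.outbound[1] is ch324
assert u316.inbound[0] is ch322 and u316.inbound[1] is ch324
```

The identity checks at the end mirror the coupling relationships described for FIG. 3A: task processing unit 312 sends to channels 320 and 324, and task processing unit 316 receives from channels 322 and 324.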
Fig. 3C shows a schematic diagram of a downstream data path according to yet another embodiment of the present application.
The cache management unit 380, the address mapping unit 382, and the data assembly unit 384 are task processing units that implement different processing functions for subcommands. The downstream data path 300 shown in fig. 3C includes a plurality of channels (370, 372, and 374), a plurality of task processing units (cache management unit 380, address mapping unit 382, and data assembly unit 384), and a plurality of resource managers (390, 392, and 394). The downstream data path illustrated in fig. 3C is thus used to implement the functionality of a memory device.
The memory device further includes a command transmission unit 302 and a subcommand processing unit 304. Optionally, the command transmission unit 302 exchanges IO commands with the host according to a specified storage protocol. The command transmission unit splits an IO command into one or more subcommands, carries each subcommand in an allocated DTU, and delivers the DTU to the downstream data path 300. The downstream data path 300 performs one or more stages of processing on the subcommand, ultimately delivering the DTU to the subcommand processing unit 304. The subcommand processing unit 304 converts subcommands carried by DTUs into commands to access the storage medium. By way of example, the subcommand processing unit 304 is a media interface controller. The subcommand processing unit 304 also accesses the storage medium to acquire the result of processing a subcommand, and an upstream data path delivers the processing result of the subcommand to the command transmission unit 302. If necessary, the command transmission unit 302 collects the processing results of all subcommands split from the same IO command and indicates to the host that IO command processing is complete.
By way of example, the command transmission unit 302 adds a DTU to the channel 370, and the DTU carrying the subcommand is provided to the cache management unit 380 via the channel 370. The cache management unit 380 allocates a cache buffer for the subcommand and moves the data accessed by the subcommand into the allocated buffer. The cache management unit 380 is associated with the resource manager 390, which manages cache resources, for example their allocation and release. In response to a subcommand carried by a DTU retrieved from the channel 370, the cache management unit 380 requests the resource manager 390 to allocate a cache buffer. The buffer allocated for the subcommand is also recorded in the DTU carrying the subcommand. In response to the data accessed by the subcommand being moved into the cache, the cache management unit 380 completes processing the subcommand and sends the DTU carrying the subcommand to the channel 372.
Optionally, the command transmission unit 302 processes IO commands that follow a variety of storage protocols including, for example, the SAS/SATA protocols, the Open Channel protocol, Key-Value storage protocols, and/or the NVMe protocol.
The address mapping unit 382 obtains a DTU from the channel 372 and obtains the subcommand carried by the DTU. The address mapping unit 382 allocates a physical address for the subcommand and establishes a mapping from the logical address to be accessed by the subcommand to the physical address. The address mapping unit is associated with the resource manager 392, which manages an address mapping table recording the mapping relationships of all logical addresses and physical addresses of the storage device. In response to the DTU, the address mapping unit 382 requests the resource manager 392 to allocate an entry associated with the logical address accessed by the subcommand carried by the DTU; the association of the logical address with the physical address is recorded in this entry. The entry allocated for the subcommand is also recorded in the DTU carrying the subcommand. In response to obtaining the physical address accessed by the subcommand, the address mapping unit 382 completes processing the subcommand and sends the DTU carrying the subcommand to the channel 374.
The data assembly unit 384 obtains a DTU from the channel 374 and the subcommand carried by the DTU. The data assembly unit 384 assembles the data to be accessed by the subcommand so as to generate a command to write the data to the storage medium. The data assembly unit 384 is associated with the resource manager 394, which manages accelerators (referred to as XOR units) for exclusive-or computation. In response to the DTU, the data assembly unit 384 requests the resource manager 394 to allocate an XOR unit to it.
The data assembly unit 384 operates the subcommand processing unit 304 to write the assembled data to the storage medium. Optionally, based on the records in the DTU, the data assembly unit 384 also releases to the resource managers (390, 392, and/or 394) one or more of the resources that were allocated to the DTU.
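One stage of the Fig. 3C pipeline, the cache management stage, can be sketched as follows. `cache_management_stage` and `CacheManager` are invented names; lists stand in for the channels and a dictionary for the DTU:

```python
class CacheManager:
    """Resource manager sketch (cf. resource manager 390): allocates
    and releases cache buffers identified by small integers."""
    def __init__(self, n):
        self.free = list(range(n))
    def allocate(self):
        return self.free.pop()
    def release(self, buf):
        self.free.append(buf)

def cache_management_stage(in_ch, out_ch, cache_manager):
    """Downstream stage sketch (cf. cache management unit 380): take a
    DTU from the inbound channel, request a cache buffer from the
    resource manager, record the allocation in the DTU, then forward
    the DTU to the next channel."""
    dtu = in_ch.pop(0)
    buf = cache_manager.allocate()
    dtu["cache_buf"] = buf          # the allocated buffer is recorded in the DTU
    out_ch.append(dtu)              # processed DTU goes to channel 372

ch370 = [{"subcmd": "write", "data": b"abc"}]
ch372 = []
cache_management_stage(ch370, ch372, CacheManager(4))
assert ch372[0]["cache_buf"] == 3
```

The address mapping and data assembly stages follow the same pattern: fetch a DTU, request a resource from the associated resource manager, record the allocation in the DTU, and forward the DTU to the next channel.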
Fig. 4A shows a schematic diagram of a downstream data path according to yet another embodiment of the present application.
The cache management unit 410, the address mapping unit 412, the data assembly unit 414, the log unit 416, and the garbage collection unit 418 are different task processing units. The downstream data path shown in fig. 4A includes a plurality of channels (420, 421, 422, 424, 426, and 428) and a plurality of task processing units (cache management unit 410, address mapping unit 412, data assembly unit 414, log unit 416, and garbage collection unit 418). The downstream data path shown in fig. 4A is used to implement the functionality of a memory device. The memory device further includes a command transmission unit 402 and a subcommand processing unit 404.
By way of example, the command transmission unit 402 adds a DTU to the channel 420, and the DTU carrying the subcommand is provided to the cache management unit 410 via the channel 420. The DTU processed by the cache management unit 410 is added to the channel 422. Address mapping unit 412 retrieves the DTU from channel 422 and adds the processed DTU to channel 424. The data assembly unit 414 obtains the DTU from the channel 424 and accesses the storage medium through the subcommand processing unit 404 according to subcommands carried by the DTU.
The data assembly unit 414 also generates a DTU to be added to the channel 426. The log unit 416 obtains the DTU from the channel 426, generates a log to be recorded from the DTU, and adds the processed DTU to the channel 428. The data assembly unit 414 also obtains the DTU from the channel 428 and writes the log to the storage medium according to the subcommand carried by the DTU.
Garbage collection unit 418 generates a DTU indicating a subcommand of a garbage collection operation and adds it to channel 421. The cache management unit 410 also obtains the DTU from channel 421 and processes the subcommand therein.
Fig. 4B shows a schematic diagram of a downstream data path according to still another embodiment of the present application.
The downstream data path shown in fig. 4B includes a plurality of channels (420, 421, 422, 424, 426, 428, 431, 430, 432, and 434) and a plurality of task processing units (cache management unit 410, cache management unit 411, address mapping unit 412, address mapping unit 413, data assembly unit 414, log unit 416, and garbage collection unit 418).
Compared to the downstream data path shown in fig. 4A, the downstream data path shown in fig. 4B includes two cache management units and two address mapping units. The cache management unit 411 operates in parallel with the cache management unit 410. The address mapping unit 413 works in parallel with the address mapping unit 412. The downstream data path illustrated in fig. 4B thus allows for parallel processing of multiple subcommands provided by command transmission unit 402, enhancing subcommand processing capability.
By way of example, the command transmission unit 402 adds a DTU to the channel 420, and the DTU carrying the subcommand is provided to the cache management unit 410 via the channel 420. The DTU processed by the cache management unit 410 is added to the channel 422. Address mapping unit 412 retrieves the DTU from channel 422 and adds the processed DTU to channel 424. The data assembly unit 414 obtains the DTU from the channel 424 and accesses the storage medium through the subcommand processing unit 404 according to subcommands carried by the DTU.
The data assembly unit 414 also generates a DTU to be added to the channel 426. The log unit 416 obtains the DTU from the channel 426, generates a log to be recorded from the DTU, and adds the processed DTU to the channel 428. The data assembly unit 414 also obtains the DTU from the channel 428 and writes the log to the storage medium according to the subcommand carried by the DTU.
Garbage collection unit 418 generates a DTU indicating a subcommand of a garbage collection operation and adds it to channel 421. The cache management unit 410 also obtains the DTU from channel 421 and processes the subcommand therein.
The embodiments of the application make it convenient to enhance the processing capacity of the task processing system. Referring to fig. 4B, task processing capacity is enhanced by providing the downstream data path with multiple task processing units coupled in parallel to the channels. Since the task processing units are schedulable, when, for example, the number of processor cores or threads of the control unit of the storage device increases, the added task processing units on the downstream data path are run on the added processor cores or threads, so that the added cores or threads are conveniently and fully utilized to process more subcommands in parallel. In some cases, because the processing stages of subcommands are unbalanced, it is difficult to find the optimal partitioning of stages and allocation of resources to each stage. For example, the workload of the cache management and address mapping stages is heavier than that of log management. The embodiments of the application simplify the adjustment of the downstream data path: configurations with different numbers of task processing units and/or channels are tested by adjusting the downstream data path, so as to conveniently find an optimal or better downstream data path structure.
A DTU carries one or more subcommands, and the downstream data path allocates one or more resources for processing the DTU. After the processing of the subcommands carried by the DTU is completed, the resources allocated to the DTU are released and the processing results corresponding to the subcommands are delivered. Even for subcommands of the same type, there are multiple processing outcomes, such as success or failure. Thus, each DTU may require a different way of releasing resources and/or of identifying and delivering processing results, so that different upstream data paths are required for different processing modes.
According to the embodiment of the application, while the downstream data path processes a DTU, an upstream data path is constructed for that DTU; after the subcommands carried by the DTU are processed, the DTU is processed through the constructed upstream data path.
Fig. 5A shows a schematic diagram of an upstream data path according to an embodiment of the present application.
Referring to fig. 5A, the downstream data path includes, for example, a plurality of task processing units (510, 520, and 530), which further include one or more callback functions (512, 522, and 532). In the example of FIG. 5A, task processing unit 510 includes callback function 512, task processing unit 520 includes callback function 522, and task processing unit 530 includes callback function 532. The DTUs processed by task processing unit 510 are provided to task processing unit 520 via channels (shown simply as arrows), and the DTUs processed by task processing unit 520 are provided to task processing unit 530 via channels.
When processing a DTU, a task processing unit of the downstream data path records the indexes of one or more callback functions in the DTU being processed. Thus, after the subcommand carried by the DTU has been processed, one or more callback function indexes are obtained from the DTU, and the callback functions are called to complete the processing of the DTU by an upstream data path. These callback functions therefore constitute the upstream data path of the DTU, or part of it.
In the example of fig. 5A, the task processing unit 530 is the last task processing unit of the downstream data path. It submits the subcommand carried by DTU 542 to a subcommand processing unit (not shown). The subcommand processing unit buffers DTU 542, processes the subcommand indicated by DTU 542, and supplies the processing result of the subcommand to the monitoring unit 550.
The monitoring unit 550 monitors and recognizes whether a subcommand has been processed. In response to completion of subcommand processing, the monitoring unit 550 acquires the DTU 542 indicating the processing result of the subcommand. For example, the monitoring unit 550 receives a subcommand-completion indication sent by the subcommand processing unit and determines that subcommand processing is complete. For another example, the monitoring unit 550 polls the subcommand processing unit, and if a subcommand-completion indication is polled, determines that subcommand processing is complete. In response, the monitoring unit 550 obtains the DTU 542 carrying the processed subcommand, obtains from it the indexes of one or more callback functions (e.g., callback functions 512, 522, and 532), and calls the callback functions indicated by those indexes in the specified order. By way of example, callback function 532 frees resources allocated by task processing unit 530 for DTU 542, callback function 522 frees resources allocated by task processing unit 520 for DTU 542, and callback function 512 frees resources allocated by task processing unit 510 for DTU 542. Callback function 512 also provides DTU 542 to a command transmission unit (not shown); callback function 512 is the last callback function called in the upstream data path. The command transmission unit acquires the processing result of the subcommand as indicated by DTU 542. The command transmission unit also releases DTU 542 so that it can be used to carry other subcommands and be provided to the downstream data path.
Optionally, the command transmission unit further combines the processing results of a plurality of subcommands originating from the same command. In response to completion of processing of all the subcommands generated from the same command, the command transmission unit returns the processing result of the command to the command issuer.
Thus, in the example of fig. 5A, callback function 532, callback function 522, and callback function 512 in turn logically process DTU 542 (indicated by the dashed arrow), and the monitoring unit 550 and the callback functions (512, 522, and 532) constitute the upstream data path of DTU 542.
Thus, according to the embodiment of the application, one or more callback function indexes are recorded in a DTU, and the monitoring unit 550 calls the corresponding callback functions in a specified order according to the callback function indexes in the DTU to be processed. An upstream data path dedicated to the DTU is thereby built for each DTU, and each DTU is processed through the upstream data path built for it, so that a different processing mode can be provided for each DTU in the upstream data path.
Fig. 5B shows a schematic diagram of an upstream data path according to yet another embodiment of the present application.
In the example of fig. 5B, the downstream data path includes three task processing units, namely a cache management unit 515, an address mapping unit 525, and a data assembly unit 535 (see also fig. 4A and 4B). The downstream data path also includes a resource manager that manages cache resources, a resource manager that manages mapping-table resources, and a resource manager that manages accelerator resources (the resource managers are not shown; only the managed resources are shown).
The cache management unit 515 obtains from the cache resources a cache unit allocated to the DTU 547 for processing the DTU 547. The address mapping unit 525 allocates a mapping table resource for the DTU 547 (e.g., locks an entry of the mapping table) for recording the storage medium address that carries the written data. The data assembly unit 535 allocates accelerator resources to the DTU 547 (for calculating check data for the written data) and submits the subcommand carried by the DTU 547 to a subcommand processing unit (e.g., a media interface controller) (not shown).
For example, when processing the DTU 547, the cache management unit 515 records the index of callback function 517 in the DTU 547, the address mapping unit 525 records the index of callback function 527 in the DTU 547, and the data assembly unit 535 records the index of callback function 537 in the DTU 547. Callback function 517 is used, for example, to free the cache resources allocated to DTU 547. Callback function 527 is used, for example, to write the storage medium address assigned to DTU 547 into a mapping table entry and to unlock the entry. Callback function 537 is used, for example, to release the accelerator assigned to DTU 547.
The monitoring unit 555 polls the subcommand processing unit to learn that the subcommand corresponding to the DTU 547 has been processed. The monitoring unit 555 acquires the indexes of the callback functions (517, 527, and 537) recorded in the DTU 547 and calls those callback functions.
By way of example, the callback functions (517, 527, and 537) take the DTU, or variables recorded in the DTU, as parameters to process the DTU. Still by way of example, the monitoring unit 555 invokes callback functions 537, 527, and 517 in that order; the order in which the callback functions are called is the reverse of the order in which their indexes were added to DTU 547. Thus, each task processing unit adds the index of a callback function to DTU 547 as if operating on a stack, and the monitoring unit 555 likewise obtains the callback function indexes from DTU 547 in stack order and calls the corresponding callback functions.
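The stack discipline described above can be sketched as follows. `DTU`, `callback_table`, and the index names "A", "B", "C" are invented for illustration; the registry maps a callback function index to a callback function:

```python
class DTU:
    """DTU sketch: downstream units push callback function indexes as
    onto a stack; the monitoring unit pops and calls them in reverse
    (LIFO) order on the upstream path."""
    def __init__(self):
        self.callbacks = []

callback_table = {}   # callback function index -> callback function
trace = []

callback_table["A"] = lambda dtu: trace.append("free cache")               # cf. 517
callback_table["B"] = lambda dtu: trace.append("write+unlock map entry")   # cf. 527
callback_table["C"] = lambda dtu: trace.append("release accelerator")      # cf. 537

dtu = DTU()
# Downstream: cache management, address mapping, data assembly, in order.
for idx in ("A", "B", "C"):
    dtu.callbacks.append(idx)

# Upstream: the monitoring unit calls the callbacks in reverse order.
while dtu.callbacks:
    idx = dtu.callbacks.pop()
    callback_table[idx](dtu)

assert trace == ["release accelerator", "write+unlock map entry", "free cache"]
```

Reverse order matters because later stages typically depend on resources acquired by earlier stages; releasing in LIFO order undoes the allocations safely.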
Fig. 5C illustrates a flow chart of constructing an upstream data path according to an embodiment of the present application.
One task processing unit of the downstream data path writes one or more callback function indexes into the DTU (570) and passes the DTU to another task processing unit of the downstream data path. The other task processing unit also writes one or more callback function indexes into the DTU (572). With one or more callback function indexes written into the DTU by one or more task processing units, the callback functions indicated by those indexes form an upstream data path for processing the DTU.
After the subcommand carried by the DTU has been processed by the subcommand processing unit, all callback function indexes recorded in the DTU are acquired, and the callback functions indicated by those indexes are called in sequence (574), so as to process the DTU through the upstream data path.
Optionally, the task processing unit selects the callback function index to record in the DTU according to the processing performed on the subcommand carried by the DTU or the resources allocated to the subcommand; the callback function corresponding to each index is preset in the task processing unit. For example, the index of a callback function that releases the allocated resources is selected, or the index of a callback function that handles a subcommand execution failure is selected.
Optionally, one task processing unit adds one or more callback function indexes to the DTU. Still alternatively, one or more task processing units add no callback function index to the DTUs they process. Thus, the number of task processing units in the downstream data path may be greater than, equal to, or less than the number of task processing units involved in the upstream data path. For example, the downstream data path has 5 task processing units, but while processing a DTU, only the cache management unit and the address mapping unit each write a callback function index into the DTU. The DTU then holds two callback function indexes, so the upstream data path for that DTU includes only two callback functions.
For example, after writing the index of callback function 517 into DTU 547, the callback function 517 of the cache management unit 515 becomes part of the upstream data path 505; that is, the cache management unit 515 is part of both the downstream data path and the upstream data path. When callback function 517 is called through its index, executing it releases the cache resource that the resource manager allocated for DTU 547.
In yet another example, the cache management unit 515 does not write the index of callback function 517 into the DTU 547 when processing the subcommand, while the address mapping unit 525, when processing DTU 547, writes into it the index of callback function 527, which is used to release the cache resources allocated by the cache management unit 515 for DTU 547. After DTU 547 has been processed, callback function 527 is called through the index recorded in DTU 547 and executed to release the cache resources. Optionally, identification information of the allocated cache resource is also recorded in DTU 547, and this identification information indicates the specific cache resource when it is released.
In some embodiments, the downstream data path includes a plurality of task processing units, each of which writes the index of a callback function into the DTU while processing the subcommand carried by the DTU. The callback function indexes written into the DTU by the task processing units may be the same or different. In one example, the cache management unit 515 requests allocation of cache resources when processing the DTU, while the address mapping unit 525 and the data assembly unit 535 do not request cache resources when processing the DTU; the callback function index written into the DTU by the cache management unit 515 therefore differs from those written by the address mapping unit 525 and the data assembly unit 535. Still optionally, the callback function indexes written into the DTU by the address mapping unit 525 and the data assembly unit 535 are the same.
A callback function of the upstream data path also returns the processing result of the subcommand to the command transmission unit.
Optionally, the one or more callback function indexes in DTU 547 are ordered, and the callback functions indicated by those indexes are invoked in order. The one or more callback functions of DTU 547, called in sequence, constitute the upstream data path. By way of example, the order in which the callback functions indicated by the indexes in DTU 547 are invoked is the reverse of the order in which the indexes were written into the DTU on the downstream data path.
Fig. 5D shows a schematic diagram of a DTU.
As shown in fig. 5D, a callback function list is recorded in the DTU, including a callback function index A, a callback function index B, and a callback function index C. By way of example, callback function index A was written by the cache management unit 515, callback function index B by the address mapping unit 525, and callback function index C by the data assembly unit 535. The indexes A, B, and C in the callback function list are ordered; in fig. 5D, indexes toward the left of the list were written into the DTU earlier than those toward the right. Optionally, in the upstream data path, the three callback functions are called in the order of callback function index C, callback function index B, and callback function index A, the reverse of the order in which the indexes were written into the DTU.
In some embodiments, after the monitoring unit obtains the DTU, one or more callback functions are called through the callback function indexes in the callback function list. For example, the monitoring unit acquires the last index in the list, callback function index C, and calls callback function C1 through it. After callback function C1 has executed, the monitoring unit continues by calling callback function B1 through callback function index B, and after callback function B1 has executed, calls callback function A1 through callback function index A. By way of example, callback function A1 is the last callback function of the upstream data path; it also returns the DTU, which carries the result of subcommand processing, to the command transmission unit.
FIG. 6A illustrates a schematic diagram of resource management according to an embodiment of the application.
A task processing unit acquires a DTU from a channel and processes it. While processing the DTU, it requests resources from the resource manager for processing the subcommand carried by the DTU. The task processing unit records an identification of the allocated resource in the DTU to indicate that the DTU currently occupies the resource. The task processing unit also records a callback function index in the DTU; the callback function indicated by this index releases the resource when executed. The task processing unit provides the processed DTU, through a channel, to other task processing units or to the subcommand processing unit.
Fig. 6B illustrates a schematic diagram of resource management according to yet another embodiment of the present application.
The downstream data path includes two task processing units (610, 612) and two resource managers (620, 622). In fig. 6B, DTU 640, DTU 642, and DTU 644 show different phases of the same DTU: DTU 640, after being processed by task processing unit 610, is shown as DTU 642, and DTU 642, after being processed by task processing unit 612, is shown as DTU 644.
When processing the DTU 640, the task processing unit 610 requests the resource manager 620 to allocate resource a. For example, resource a represents a cache resource. The task processing unit 610 provides the DTU 642 to the task processing unit 612 and records the identification of the allocated resource a and the index of callback function A1 in the DTU 642. Callback function A1, when executed, releases resource a to the resource manager.
When processing the DTU 642, the task processing unit 612 requests the resource manager 622 to allocate resource B. For example, resource B represents an accelerator resource. The task processing unit 612 generates the DTU 644 and records the identification of the allocated resource B and the index of callback function B1 in the DTU 644. Callback function B1, when executed, releases resource B to the resource manager. Thus, of the records in the DTU 644, the allocated resource a and the index of callback function A1 were added by the task processing unit 610, while the allocated resource B and the index of callback function B1 were added by the task processing unit 612.
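The accumulation of (resource, release-callback) records along the downstream data path of fig. 6B can be sketched as follows. The class names, the resource-instance naming scheme, and the `"write-4K"` subcommand are hypothetical; only the recording behaviour follows the description above.

```python
class ResourceManager:
    """Hypothetical manager handing out named instances of one resource kind."""
    def __init__(self, kind):
        self.kind = kind
        self.free = [f"{kind}-0", f"{kind}-1"]
    def allocate(self):
        return self.free.pop(0)
    def release(self, rid):
        self.free.append(rid)

class DTU:
    def __init__(self, subcommand):
        self.subcommand = subcommand
        self.records = []  # list of (resource id, release-callback index)

mgr_a = ResourceManager("cache")        # plays the role of resource manager 620
mgr_b = ResourceManager("accelerator")  # plays the role of resource manager 622

def task_unit_610(dtu):
    res_a = mgr_a.allocate()
    dtu.records.append((res_a, "A1"))   # callback A1 will release resource a
    return dtu                          # DTU 640 -> DTU 642

def task_unit_612(dtu):
    res_b = mgr_b.allocate()
    dtu.records.append((res_b, "B1"))   # callback B1 will release resource B
    return dtu                          # DTU 642 -> DTU 644

dtu = task_unit_612(task_unit_610(DTU("write-4K")))
print(dtu.records)  # [('cache-0', 'A1'), ('accelerator-0', 'B1')]
```

As in the figure, the final DTU carries one record per allocating task processing unit, so the return path knows exactly which callbacks must run to free which resources.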
Fig. 6C illustrates a schematic diagram of resource management according to yet another embodiment of the present application.
The downstream data path includes three task processing units (650, 652, and 654) and two resource managers (660, 662). In fig. 6C, DTU 670, DTU 672, and DTU 674 show different phases of the same DTU, whereas DTU 670 and DTU 680 represent different DTUs. DTU 680 and DTU 682 represent different phases of the same DTU. DTU 670 is shown as DTU 672 after being processed by task processing unit 650, and DTU 672 is shown as DTU 674 after being processed by task processing unit 652. DTU 680 is shown as DTU 682 after being processed by task processing unit 654.
When processing the DTU 670, the task processing unit 650 requests the resource manager 660 to allocate resource a. For example, resource a represents a cache resource. Task processing unit 650 provides DTU 672 to task processing unit 652 and records the allocated resource a in DTU 672. The task processing unit 650 also records a callback function index in the DTU 672; however, the callback function indicated by that index is not, in this example, used to release resource a when executed.
When processing DTU 680, task processing unit 654 requests the resource manager 660 to allocate resource a'. Resource a' and resource a are resources of the same kind (e.g., cache resources), but represent different instances of that kind of resource. The task processing unit 654 records the allocated resource a' and a callback function index in the DTU 682; the callback function indicated by that index is not used to release resource a' when executed.
The resource manager manages the allocation of resources. For example, resource manager 660 ensures that an instance of a resource (e.g., resource a) is not allocated to both DTU 672 and DTU 682. For example, resource manager 660 maintains a lock for each resource instance to ensure that one resource instance is allocated to only one DTU at a time. A data path may include multiple task processing units with the same functionality and/or using the same kind of resources, and these task processing units request resources through the same resource manager.
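The exclusivity guarantee of the resource manager can be sketched as follows. This is a minimal, assumed implementation (a single lock guarding a free list and an owner map); the application does not specify how the per-instance locking is realized.

```python
import threading

class ResourceManager:
    """Hypothetical manager that never hands the same instance to two DTUs."""
    def __init__(self, instances):
        self._lock = threading.Lock()   # serializes allocate/release
        self._free = list(instances)
        self._owner = {}                # resource instance -> DTU identifier
    def allocate(self, dtu_id):
        with self._lock:
            inst = self._free.pop(0)
            self._owner[inst] = dtu_id
            return inst
    def release(self, inst):
        with self._lock:
            del self._owner[inst]
            self._free.append(inst)

# Two instances of the same kind of resource, as with resource a and a' in fig. 6C.
mgr = ResourceManager(["a", "a'"])
r1 = mgr.allocate(dtu_id=672)   # DTU 672 receives instance "a"
r2 = mgr.allocate(dtu_id=682)   # DTU 682 receives the distinct instance "a'"
print(r1, r2)                   # a a'
mgr.release(r1)                 # later freed by whichever callback holds the record
```

Because every allocation and release passes through the same lock, two task processing units sharing one resource manager can never observe the same instance as simultaneously owned by two DTUs.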
Still referring to FIG. 6C, the callback function that task processing unit 650 adds to DTU 672 is not used to release the resource a that it requested for DTU 672. This means that the allocation and release of the same resource need not be the responsibility of the same task processing unit (they may be handled by different task processing units), which adds flexibility to task processing. It will be appreciated that it is equally possible for the same task processing unit to be responsible for releasing the resources it allocated.
When processing the DTU 672, task processing unit 652 requests the resource manager 662 to allocate resource B. Task processing unit 652 generates the DTU 674 and records the identification of the allocated resource B and the index of callback function B1 in the DTU 674. Callback function B1, when executed, releases resource B to the resource manager. The task processing unit 652 also records the index of callback function A1 in the DTU 674. Callback function A1, when executed, releases resource a to the resource manager.
FIG. 7 illustrates a block diagram of a storage device constructed using a task processing system in accordance with an embodiment of the application.
A task processing system according to an embodiment of the present application is used to construct a storage device and is implemented, for example, by a control component of the storage device.
The task processing system comprises a command transmission unit, a data path and a subcommand processing unit. The subcommand processing unit is coupled to the storage medium.
The command transmission unit exchanges IO commands with the host according to a designated storage protocol. The command transmission unit splits an IO command into one or more subcommands, allocates a DTU to carry each subcommand, and delivers the DTU to the data path.
The data path includes a downstream data path and an upstream data path. The downstream data path processes the subcommand in one or more stages and finally delivers the DTU to the subcommand processing unit. The subcommand processing unit converts the subcommand carried by the DTU into commands for accessing the storage medium. By way of example, the subcommand processing unit is a media interface controller. The subcommand processing unit also accesses the storage medium to acquire the processing result of the command that accessed the storage medium. The upstream data path delivers the processing result of the subcommand to the command transmission unit. By way of example, the upstream data path polls the subcommand processing unit to obtain the processing results of subcommands. If necessary, the command transmission unit collects the processing results of the one or more subcommands split from the same IO command, and after all subcommands split from the same IO command have been processed, provides the processing result of the IO command to the host.
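The split-and-aggregate behaviour of the command transmission unit described above can be sketched as follows. The 4 KiB subcommand granularity, the `(lba, length)` command shape, and the `"ok"` status value are assumptions made for illustration only.

```python
class CommandTransmissionUnit:
    SPLIT_SIZE = 4096  # assumed subcommand granularity (4 KiB)

    def split(self, io_cmd):
        """Split an IO command (lba, length in bytes) into fixed-size subcommands."""
        lba, length = io_cmd
        return [(lba + off // self.SPLIT_SIZE, self.SPLIT_SIZE)
                for off in range(0, length, self.SPLIT_SIZE)]

    def aggregate(self, results):
        """The IO command completes only when every subcommand has succeeded."""
        return all(r == "ok" for r in results)

ctu = CommandTransmissionUnit()
subs = ctu.split((100, 3 * 4096))
print(len(subs))                  # 3 subcommands for a 12 KiB IO command
print(ctu.aggregate(["ok"] * 3))  # True: result may now be reported to the host
```

A partial failure (any result other than `"ok"`) keeps `aggregate` false, matching the requirement that the host only sees the IO command's result after all of its subcommands are processed.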
According to embodiments of the present application, facilities are provided for supporting virtualization in a storage device. For example, the NVMe protocol defines a namespace (NS for short). A namespace exposes a virtual storage device or a logical storage device to a host accessing the storage device. Thus, by providing multiple namespaces on a single control component, each namespace provides a virtualized storage device to the host. As yet another example, multiple virtual storage devices accessed through different storage protocols are provided simultaneously by a single control component of the storage device, e.g., a storage device that simultaneously supports the NVMe protocol, the Open Channel (OpenChannel) protocol, and/or the SATA protocol.
Resources are also allocated for each virtual storage device. For example, it is advantageous that the cache resources, storage medium resources, and accelerator resources be shared among the virtual storage devices, while the mapping table resources be exclusive to each virtual storage device.
According to embodiments of the present application, the implementation of the various requirements described above is facilitated.
FIG. 8A illustrates a block diagram of a storage device constructed using a task processing system according to yet another embodiment of the present application.
According to the embodiment of FIG. 8A, the storage device exposes to the host multiple namespaces (NS0, NS1, NS2, and NS3, respectively), such as those of the NVMe protocol. The host accesses the virtual storage device provided by each namespace according to the NVMe protocol.
A task processing system according to an embodiment of the present application is used to construct the storage device illustrated in fig. 8A and is implemented, for example, by a control component of the storage device.
An instance of the task processing system includes a command transmission unit, a plurality of data paths (810, 812, 814, and 816), and a subcommand processing unit. The subcommand processing unit is coupled to the storage medium. Each of the plurality of data paths (810, 812, 814, 816) provides one of the namespaces. By way of example, data path 810 provides namespace NS0, data path 812 provides namespace NS1, data path 814 provides namespace NS2, and data path 816 provides namespace NS3.
The command transmission unit splits an IO command provided by the host into subcommands, and provides the subcommands split from the IO command to the data path corresponding to the namespace accessed by the IO command. For example, if an IO command accesses namespace NS2, the command transmission unit provides all subcommands split from that IO command to data path 814, and all IO commands accessing namespace NS2 are processed by data path 814.
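The namespace-based routing described above amounts to a lookup from namespace to data path. A minimal sketch, with the namespace-to-path table and subcommand labels assumed for illustration:

```python
# Namespace -> data path, following the fig. 8A assignment (810..816).
DATA_PATHS = {"NS0": 810, "NS1": 812, "NS2": 814, "NS3": 816}

def dispatch(io_cmd):
    """Route every subcommand of an IO command to its namespace's data path."""
    namespace, subcommands = io_cmd
    path = DATA_PATHS[namespace]      # one namespace, exactly one data path
    return [(path, sub) for sub in subcommands]

routed = dispatch(("NS2", ["sub-0", "sub-1"]))
print(routed)  # [(814, 'sub-0'), (814, 'sub-1')]
```

Because the lookup is total and one-to-one, all subcommands of an IO command, and all IO commands on a namespace, land on the same data path, which is what isolates the namespaces from one another.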
Thus, according to embodiments of the present application, the storage device is enabled to provide multiple namespaces simply by replicating the data path.
Optionally, each namespace is provided with exclusive mapping table resources, while other resources (storage medium resources, accelerator resources, etc.) are shared. Thus, the task processing system provides, for example, four resource managers for managing mapping table resources. Each of these resource managers is coupled to one of the data paths and manages only the mapping table resources corresponding to the namespace associated with the data path to which it is coupled, thereby effectively isolating mapping table resources between namespaces. The task processing system also provides each data path with its own dedicated resource managers for other types of resources. Taking the storage medium resource manager as an example, each storage medium resource manager coupled to a data path manages, for example, all storage media of the storage device, so that each data path can place data written by a subcommand on any available storage medium of the storage device, improving the utilization of storage medium resources and facilitating global wear leveling of the storage device.
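The resource topology just described, exclusive mapping tables per namespace but globally visible storage media, can be sketched as follows. The die names and the manager classes are illustrative assumptions.

```python
# All physical media of the storage device, visible to every data path
# so that global wear leveling remains possible (names are hypothetical).
ALL_MEDIA = ["die0", "die1", "die2", "die3"]

class MappingTableManager:
    """Exclusive: exactly one manager per namespace, isolating its mapping table."""
    def __init__(self, namespace):
        self.namespace = namespace
        self.table = {}  # logical address -> physical address, private to this NS

class StorageMediumManager:
    """Per-data-path manager that can nevertheless allocate from any die."""
    def __init__(self):
        self.media = list(ALL_MEDIA)

# One exclusive mapping-table manager per namespace of fig. 8A.
map_mgrs = {ns: MappingTableManager(ns) for ns in ("NS0", "NS1", "NS2", "NS3")}

# Each data path gets its own storage-medium manager, but all of them
# see the full set of media.
media_mgr_ns0 = StorageMediumManager()
media_mgr_ns1 = StorageMediumManager()
print(len(map_mgrs))                         # 4 isolated mapping tables
print(media_mgr_ns0.media == ALL_MEDIA)      # True: NS0's path may use any die
print(media_mgr_ns1.media == ALL_MEDIA)      # True: so may NS1's path
```

The asymmetry is deliberate: isolation where correctness demands it (mapping tables), sharing where utilization and wear leveling benefit from it (media).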
It will be appreciated that, in embodiments according to the present application, a variety of other correspondences are possible between the data paths, the resource managers, and the various resources being managed. For example, each data path may be provided with exclusive storage medium resources to mitigate interaction between the data paths. Importantly, in the task processing system according to the present application, data paths are conveniently replicated, resource managers are conveniently coupled to data paths, and the resources of the storage device are conveniently managed by the resource managers. Thus, development of new functions of the storage device is expedited.
FIG. 8B illustrates a block diagram of a storage device constructed using a task processing system in accordance with yet another embodiment of the present application.
According to the embodiment of fig. 8B, the storage device exposes to the host the functions of various devices, such as an NVMe storage device, an Open Channel (OC) storage device, an accelerator with specified functions, an MCTP (Management Component Transport Protocol) endpoint, a management queue (AdminQueue) of the NVMe device, and so on. The host accesses the functionality provided by each device according to a different protocol.
A task processing system according to an embodiment of the present application is used to construct the storage device illustrated in fig. 8B and is implemented, for example, by a control component of the storage device.
An instance of the task processing system includes a command transmission unit, a plurality of data paths (820, 822, 824, and 826), and a plurality of subcommand processing units (830, 836, and 839). The subcommand processing unit 830 is coupled to a storage medium. The subcommand processing unit 839 is coupled to an accelerator (e.g., an accelerator that performs encryption/decryption calculations according to the AES/SM4 standard).
By way of example, data path 820 provides functionality for NVMe storage devices that handle IO commands of the NVMe protocol, and data path 822 provides functionality for OC storage devices that handle IO commands of the OC protocol. The data path 824 processes management commands of the MCTP protocol or management commands of the NVMe protocol. The data path 826 handles access requests to the accelerator.
The command transmission unit forwards each command to the corresponding data path according to the protocol used by the command. Optionally, for IO commands, the first task processing unit of data path 820 and/or data path 822 performs the splitting of IO commands into subcommands. These IO commands access the storage media of the storage device. The single subcommand processing unit 830 serves both data path 820 and data path 822, so that all storage media of the storage device can be used by both the NVMe storage device provided by data path 820 and the OC storage device provided by data path 822, thereby improving the utilization of the storage media.
Still alternatively, the command transmission unit splits IO commands into subcommands, allocates DTUs to the subcommands, and provides the subcommands to the data path corresponding to the protocol of the IO command. The command transmission unit also allocates DTUs for management commands and for commands that access the accelerator. The DTU is used to carry the various commands to be processed by one or more task processing units in the data path. By way of example, the command transmission unit does not have to split a management command and/or a command to access the accelerator into subcommands; rather, the DTU carries the management command and/or the command to access the accelerator as a whole.
Management commands (e.g., following the MCTP protocol or the NVMe protocol), which query or set the status of the device, are processed by data path 824 and subcommand processing unit 836. When a management command queries, for example, the amount of available space of the storage device, the subcommand processing unit 836 obtains the usage status of the storage medium from memory, without involving dedicated hardware. When a management command queries, for example, the device temperature, the subcommand processing unit 836 is coupled, for example, to a temperature sensor (not shown) to obtain temperature information.
The accelerator to which the subcommand processing unit 839 is coupled is, for example, an accelerator that performs encryption/decryption calculations according to the AES/SM4 standard. Optionally, the resource managers of data path 820 and data path 822 encapsulate the accelerator as a resource to be used by the task processing units of data paths 820/822. Data path 826 and subcommand processing unit 839, by contrast, present the accelerator as a device providing the relevant services, so that the host can use the accelerator directly.
Thus, according to embodiments of the present application, the storage device is facilitated to provide the functionality of a variety of virtual devices by creating a variety of data paths.
The examples referred to in the present application are described for illustrative purposes only and are not intended to limit the application; modifications, additions, and/or deletions may be made to the embodiments without departing from the scope of the application.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto; any person skilled in the art will readily conceive of variations or substitutions that fall within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.