Disclosure of Invention
In order to solve the above problems, the present application provides a control method, device, and medium based on an intelligent prosthesis, including:
In a first aspect, the present application provides a control method based on an intelligent prosthesis, comprising: collecting, by the intelligent prosthesis, electromyographic signals at the stump of a patient through an electromyographic signal acquisition device, and processing and analyzing the electromyographic signals to obtain an analysis result; determining a first action mode according to the analysis result, wherein the first action mode includes at least one of the following: a ball grasping mode, a column grasping mode, and a fingertip pinching mode; collecting an environment image and a human eye image through an image acquisition device on the intelligent prosthesis, and determining an action execution target in the environment image according to the human eye image; recognizing the action execution target through a pre-trained first recognition model to obtain a first recognition result, and determining a second action mode according to the first recognition result, wherein the second action mode includes at least one of the following: side grasping, top grasping, and bottom grasping; and recognizing the action execution target through a pre-trained second recognition model to obtain a second recognition result, and determining an action execution force according to the second recognition result.
In one example, recognizing the action execution target through the pre-trained first recognition model to obtain the first recognition result, and determining the second action mode according to the first recognition result, specifically includes: inputting an environment image containing the action execution target into the pre-trained first recognition model, so as to extract, through the first recognition model, the graphic features of the action execution target and the contact surface features surrounding the action execution target, wherein the contact surface features are the appearance features of a contact object that supports or pulls the action execution target; constructing a three-dimensional image corresponding to the action execution target according to the graphic features, and acquiring appearance parameters of the three-dimensional image; determining a selection range of the second action mode according to the appearance parameters and pre-stored hand parameters of the intelligent prosthesis, wherein the selection range includes: side grasping, top grasping, and bottom grasping; and selecting an optimal execution mode from the selection range as the second action mode according to the contact surface features.
In one example, recognizing the action execution target through the pre-trained second recognition model to obtain the second recognition result, and determining the action execution force according to the second recognition result, specifically includes: inputting an environment image containing the action execution target into the pre-trained second recognition model to extract characteristic parameters of the action execution target through the second recognition model; querying, according to the characteristic parameters, article parameters in a database whose similarity to the characteristic parameters reaches a preset threshold, wherein the article parameters include at least one of the following: article type, article weight, and article material; and analyzing and calculating the article parameters to obtain the minimum grasping static friction force of the action execution target, and taking the minimum grasping static friction force as the action execution force.
In one example, after analyzing and calculating the article parameters to obtain the minimum grasping static friction force of the action execution target and taking the minimum grasping static friction force as the action execution force, the method further includes: obtaining a force adjustment value according to the minimum grasping static friction force and a pre-stored number of force adjustment levels; attempting to grasp the action execution target according to the first action mode, the second action mode, and the action execution force; collecting slip signals in real time through a slip sensor of the intelligent prosthetic hand, and judging whether sliding exists according to the slip signals; if not, keeping the action execution force unchanged; if yes, increasing the action execution force stepwise by the force adjustment value, according to the pre-stored number of force adjustment levels, until the slip signals indicate that no sliding exists.
In one example, collecting the environment image and the human eye image through the image acquisition device on the intelligent prosthesis, and determining the action execution target in the environment image according to the human eye image, specifically includes: collecting a human eye image of the patient through the image acquisition device on the intelligent prosthesis, obtaining a cross section and a vertical section of the patient's eyeball from the human eye image, and taking the extension direction of the cross section as the gazing direction; constructing a reference axis parallel to the vertical section with the position of the image acquisition device as the origin; obtaining a target acquisition direction of the image acquisition device according to the gazing direction and the reference axis, and controlling the image acquisition device to rotate to the target acquisition direction; and acquiring an environment image in the target acquisition direction through the image acquisition device, and taking the object that coincides with the center point of the environment image as the action execution target.
In one example, determining the first action mode according to the analysis result specifically includes: querying, according to the analysis result and through a pre-trained fuzzy matching model, the action mode in a database with the highest matching degree with the analysis result, wherein the action mode includes at least one of the following: a ball grasping mode, a column grasping mode, and a fingertip pinching mode; and taking the action mode with the highest matching degree as the first action mode.
In one example, before the intelligent prosthesis collects the electromyographic signals at the stump of the patient through the electromyographic signal acquisition device and processes and analyzes the electromyographic signals to obtain the analysis result, the method further comprises: acquiring, by the intelligent prosthesis, the action intention of the patient, and collecting a plurality of training electromyographic signals at the stump of the patient according to the action intention, wherein the action intention includes at least one of the following: ball grasping, column grasping, and fingertip pinching; inputting the plurality of training electromyographic signals and the corresponding action intentions into an initial fuzzy matching model for fuzzy rule training to obtain a plurality of fuzzy rules; and constructing a fuzzy matching model according to the plurality of fuzzy rules.
In one example, after recognizing the action execution target through the pre-trained second recognition model to obtain the second recognition result and determining the action execution force according to the second recognition result, the method further includes: obtaining a corresponding bioelectric strength according to the action execution force; simulating a stimulation current consistent with the bioelectric strength through an electrode unit on the intelligent prosthesis; and applying the stimulation current to the stump of the patient.
In another aspect, the present application provides a control device based on an intelligent prosthesis, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the following: collecting electromyographic signals at the residual limb of a patient through an electromyographic signal acquisition device, and processing and analyzing the electromyographic signals to obtain an analysis result; determining a first action mode according to the analysis result, wherein the first action mode includes at least one of the following: a ball grasping mode, a column grasping mode, and a fingertip pinching mode; collecting an environment image and a human eye image through an image acquisition device on the intelligent prosthesis, and determining an action execution target in the environment image according to the human eye image; recognizing the action execution target through a pre-trained first recognition model to obtain a first recognition result, and determining a second action mode according to the first recognition result, wherein the second action mode includes at least one of the following: side grasping, top grasping, and bottom grasping; and recognizing the action execution target through a pre-trained second recognition model to obtain a second recognition result, and determining an action execution force according to the second recognition result.
In another aspect, the present application provides a non-volatile computer storage medium storing computer-executable instructions configured to: collect electromyographic signals at the residual limb of a patient through an electromyographic signal acquisition device, and process and analyze the electromyographic signals to obtain an analysis result; determine a first action mode according to the analysis result, wherein the first action mode includes at least one of the following: a ball grasping mode, a column grasping mode, and a fingertip pinching mode; collect an environment image and a human eye image through an image acquisition device on the intelligent prosthesis, and determine an action execution target in the environment image according to the human eye image; recognize the action execution target through a pre-trained first recognition model to obtain a first recognition result, and determine a second action mode according to the first recognition result, wherein the second action mode includes at least one of the following: side grasping, top grasping, and bottom grasping; and recognize the action execution target through a pre-trained second recognition model to obtain a second recognition result, and determine an action execution force according to the second recognition result.
The control method, device, and medium based on an intelligent prosthesis provided by the present application have the following beneficial effects: the grasping target is accurately recognized; meanwhile, the intelligent prosthesis can be controlled to apply different grasping forces to different grasping targets, which avoids damage to fragile articles caused by excessive grasping force; at the same time, the grasping force is adjusted in real time, which also prevents the grasping target from dropping. In addition, a corresponding stimulation electric signal is sent to the patient according to the grasping force, so that the patient can perceive the current grasping force.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that the control method based on the intelligent artificial limb according to the present application may be stored, in the form of a program or algorithm, in a corresponding system installed in the intelligent artificial limb. To support the operation of this system, the intelligent artificial limb should be equipped with corresponding components, such as a processor, a memory, and a communication module, so as to support the program or algorithm. Meanwhile, the intelligent artificial limb should include other hardware to support all the technical schemes described in the application, for example: an electromyographic signal acquisition device, an image acquisition device, a slip sensor, an electrode unit, and the like.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
As shown in fig. 1, a control method based on an intelligent artificial limb provided by an embodiment of the application includes:
S101: the intelligent artificial limb collects the electromyographic signals of the stump of the patient through the electromyographic signal collecting device, and processes and analyzes the electromyographic signals to obtain an analysis result.
The intelligent artificial limb is connected to the patient's residual limb and has a myoelectric sensing function. It is provided with a myoelectric signal acquisition device that can be attached to the skin of the residual limb; when the patient forms a certain grasping intention, the corresponding myoelectric signals can be collected from the residual limb.
Further, a processor is arranged in the intelligent artificial limb to analyze and process the collected electromyographic signals. Before processing, the electromyographic signals are amplified by an amplifier to ensure sufficient signal strength.
Further, an analysis result is obtained.
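The application does not fix a concrete signal pipeline, so the following is a minimal sketch of the amplification and analysis step, assuming a sampled EMG window and common time-domain features (mean absolute value, RMS, zero crossings); the function name `analyze_emg` and the gain value are illustrative only, not taken from the application.

```python
import numpy as np

def analyze_emg(window: np.ndarray, gain: float = 1000.0) -> dict:
    """Amplify a raw EMG window and extract simple time-domain features.

    `window` holds raw EMG samples from the residual limb. MAV, RMS and
    zero-crossing count are common myoelectric-control features; the
    application itself does not specify which features are used.
    """
    amplified = window * gain                       # signal amplification step
    mav = float(np.mean(np.abs(amplified)))         # mean absolute value
    rms = float(np.sqrt(np.mean(amplified ** 2)))   # root mean square
    zc = int(np.sum(np.diff(np.sign(amplified)) != 0))  # zero crossings
    return {"mav": mav, "rms": rms, "zc": zc}
```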
S102: determining a first action mode according to the analysis result, wherein the first action mode includes at least one of the following: a ball grasping mode, a column grasping mode, and a fingertip pinching mode.
Specifically, according to the analysis result, the intelligent artificial limb queries, through a pre-trained fuzzy matching model, the action mode in the database with the highest matching degree with the analysis result, wherein the action mode includes at least one of the following: a ball grasping mode, a column grasping mode, and a fingertip pinching mode.
Further, the action mode with the highest matching degree is taken as the first action mode, that is, any one of the ball grasping mode, the column grasping mode, and the fingertip pinching mode.
It should be noted that the intelligent artificial limb may be provided with a memory serving as the database, or may establish a wireless connection with a remote server and its database through the communication module so as to access the database; the specific implementation form is not limited in the embodiments of the present application.
In addition, when a patient consciously uses a certain grasping mode, a specific myoelectric signal is generated at the stump. However, the generated myoelectric signals are not completely consistent even for the same grasping mode; therefore, the embodiment of the application introduces a fuzzy matching model to realize the scheme.
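As a rough illustration of how fuzzy matching can tolerate this signal variability, the sketch below scores an extracted feature vector against one Gaussian membership function per grasp mode and returns the best match; the prototype vectors, spreads, and the `FUZZY_RULES` table are invented for illustration and are not specified by the application.

```python
import numpy as np

# One illustrative rule per grasp mode: a prototype feature vector and a
# spread controlling how quickly membership decays with distance.
FUZZY_RULES = {
    "ball_grasp":      {"prototype": np.array([0.8, 0.3, 0.5]), "spread": 0.2},
    "column_grasp":    {"prototype": np.array([0.4, 0.7, 0.6]), "spread": 0.2},
    "fingertip_pinch": {"prototype": np.array([0.2, 0.2, 0.9]), "spread": 0.2},
}

def match_action_mode(features: np.ndarray) -> str:
    """Return the grasp mode with the highest fuzzy membership degree."""
    def membership(rule: dict) -> float:
        distance = float(np.linalg.norm(features - rule["prototype"]))
        return float(np.exp(-(distance / rule["spread"]) ** 2))  # Gaussian
    return max(FUZZY_RULES, key=lambda mode: membership(FUZZY_RULES[mode]))
```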
S103: and acquiring an environment image and a human eye image through the image acquisition device on the intelligent artificial limb, and determining an action execution target in the environment image according to the human eye image.
Specifically, as shown in fig. 2, the intelligent prosthesis 10 is provided with an image acquisition device 20, where the image acquisition device 20 includes a camera and a rotating device, and the camera can be rotated by the rotating device to acquire images at different angles.
It should be noted that, after the intelligent artificial limb determines the first action mode according to the analysis result, the image acquisition device 20 can automatically track the position of the patient's eyeball 30; that is, the position of the eyeball 30 is located by rotating through 360 degrees, and the human eye image is collected once the camera is confirmed to be aligned with the eyeball 30.
Further, the intelligent prosthesis 10 collects an image of the patient's eye, that is, an image of the eyeball 30, through its image acquisition device 20; the acquisition may rely on feature recognition of the eyeball to align the camera.
It should be noted that the intelligent artificial limb 10 should be internally provided with a processing unit with corresponding computing power, which may store a corresponding program for recognizing eyeball features; that is, the processing unit controls the image acquisition device 20, and after determining that the camera is aligned with the eyeball 30, controls the rotating device to stop rotating. In addition, if the processing unit does not recognize the eyeball 30 after controlling the rotating device to rotate through 360 degrees (for example, the patient's arm is currently hanging down so that the eyeball 30 cannot be seen), a voice prompt device arranged on the intelligent artificial limb 10 prompts the patient to lift the residual limb, thereby raising the intelligent artificial limb 10 until the human eye image is acquired in the above manner; alternatively, the intelligent artificial limb 10 is automatically lifted a certain distance under the control of the processing unit until the human eye image is acquired in the above manner.
It should be further noted that the above technical scheme relies on the processing unit inside the intelligent artificial limb 10 to perform the corresponding processing and calculation; in addition, a connection with a remote server may be established through the communication module arranged inside the intelligent artificial limb 10, so that the computing power of the remote server supports and implements the above technical scheme.
Further, the cross section 40 and the vertical section 50 of the patient's eyeball are obtained from the human eye image, and the extension direction of the cross section 40 is taken as the gazing direction; the extension direction here is the forward-looking direction of the eyeball, which eventually intersects the action execution target 60.
Further, the intelligent prosthesis constructs a reference axis 70 parallel to the vertical section 50, with the position of the image acquisition device as the origin.
Further, the target acquisition direction 80 of the image acquisition device is obtained from the gazing direction (i.e., the extension direction of the cross section 40) and the reference axis 70. That is, the value of the rotation angle 90 is calculated with the reference axis 70 as the baseline, and the camera is rotated by the rotation angle 90 to align with the target acquisition direction 80; given the reference axis 70 and the gazing direction, the rotation angle 90 can be calculated by a trigonometric function.
Further, the image acquisition device 20 acquires the environment image in the target acquisition direction 80, and takes the object coinciding with the center point of the environment image as the action execution target. That is, when the image acquisition device 20 is rotated to the target acquisition direction 80, the direction in which the camera is aimed intersects the patient's gazing direction, and a certain action execution target lies at the center of the acquired environment image.
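The text leaves the exact geometry to a trigonometric calculation, so the following is a minimal 2-D sketch of that step, assuming the gazing direction and the reference axis are given as unit vectors in a common plane; the function name and the vector representation are assumptions made for illustration.

```python
import numpy as np

def camera_rotation_angle(gaze_dir: np.ndarray, reference_axis: np.ndarray) -> float:
    """Signed angle (degrees) from the reference axis 70 to the gazing
    direction, i.e. how far the rotating device must turn the camera so
    that its optical axis points along the target acquisition direction 80.
    """
    # z-component of the cross product reference_axis x gaze_dir gives the
    # sign of the rotation; the dot product gives its cosine.
    cross = reference_axis[0] * gaze_dir[1] - reference_axis[1] * gaze_dir[0]
    dot = float(np.dot(reference_axis, gaze_dir))
    return float(np.degrees(np.arctan2(cross, dot)))

# e.g. a gaze 30 degrees counterclockwise of the reference axis:
# camera_rotation_angle(np.array([np.cos(np.pi/6), np.sin(np.pi/6)]),
#                       np.array([1.0, 0.0]))  ->  ~30.0
```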
S104: performing recognition processing on the action execution target through a pre-trained first recognition model to obtain a first recognition result, and determining a second action mode according to the first recognition result, wherein the second action mode at least comprises one of the following steps: side grabbing, upper grabbing and bottom grabbing.
Specifically, the intelligent prosthesis inputs an environmental image containing the action execution target to a first recognition model trained in advance to extract the graphic features of the action execution target and the contact surface features surrounding the action execution target through the first recognition model.
Note that the graphic features are the various types of elements constituting the action execution target, including, but not limited to: circular arcs, circles, rectangles, trapezoids, and the like. The contact surface features are the appearance features of the contact object that supports or pulls the action execution target, including, but not limited to: table tops, pull ropes, and the like.
Further, a three-dimensional image corresponding to the action execution target is constructed according to the graphic features, and appearance parameters of the three-dimensional image are acquired, wherein the appearance parameters include, but are not limited to: length, width, height, diameter, cross-sectional area, etc.
Further, a selection range of the second action mode is determined according to the appearance parameters and the pre-stored hand parameters of the intelligent artificial limb, wherein the selection range includes: side grasping, top grasping, and bottom grasping. For example, the action execution target is determined to be a cylinder based on the appearance parameters, and the hand parameters indicate that the prosthetic hand can fully or partially encircle the cylinder; therefore, the selection range at this time may include all of the above modes.
Further, an optimal execution mode is selected from the selection range as the second action mode according to the contact surface features. For example, when the contact surface features indicate that the cylinder is placed on a table top, bottom grasping can be filtered out; and since, for a cylinder, side grasping is far more stable than top grasping, side grasping can be selected as the second action mode according to the preset grasp for this type of article.
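A minimal sketch of this selection logic follows, assuming the appearance parameters and hand parameters are available as plain dictionaries; the field names (`diameter`, `max_grip_width`), the contact-surface label, and the stability ranking are illustrative, not taken from the application.

```python
def select_second_action_mode(appearance: dict, hand: dict,
                              contact_surface: str) -> str:
    """Choose side/top/bottom grasping from the appearance parameters,
    the pre-stored hand parameters, and the contact surface features."""
    if appearance["diameter"] <= hand["max_grip_width"]:
        feasible = ["side_grasp", "top_grasp", "bottom_grasp"]
    else:
        feasible = ["top_grasp"]          # too wide to encircle from the side
    if contact_surface == "table_top":    # resting object: bottom unreachable
        feasible = [m for m in feasible if m != "bottom_grasp"]
    # Stability ranking; the text only states that, for a cylinder, side
    # grasping is far more stable than top grasping.
    preference = ["side_grasp", "top_grasp", "bottom_grasp"]
    return next(m for m in preference if m in feasible)

# e.g. a 6 cm cylinder on a table, with a 9 cm maximum grip width:
# select_second_action_mode({"diameter": 0.06},
#                           {"max_grip_width": 0.09}, "table_top")
# -> "side_grasp"
```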
S105: and carrying out recognition processing on the action execution target through a pre-trained second recognition model to obtain a second recognition result, and determining action execution strength according to the second recognition result.
Specifically, the intelligent artificial limb inputs an environment image containing the action execution target to a pre-trained second recognition model to extract characteristic parameters of the action execution target through the second recognition model.
Further, according to the characteristic parameters, article parameters whose similarity to the characteristic parameters reaches a preset threshold are queried from a database and/or the Internet, wherein the article parameters include at least one of the following: article type, article weight, and article material.
Further, the article parameters are analyzed and calculated to obtain the minimum grasping static friction force of the action execution target, and the minimum grasping static friction force is taken as the action execution force.
The above article parameters reflect the relevant physical properties of the action execution target.
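For the force calculation itself, a minimal sketch under the usual static-friction model follows: the grasp holds when the total static friction at the contacts at least balances the article's weight. The mass and friction coefficient would come from the queried article parameters (weight and material); the two-contact assumption and the numbers in the usage comment are illustrative.

```python
G = 9.81  # gravitational acceleration, m/s^2

def minimum_grasp_force(mass_kg: float, friction_coeff: float,
                        n_contacts: int = 2) -> float:
    """Minimum normal (grip) force per contact so that static friction
    supports the article's weight.

    Static friction per contact is at most mu * N; with n_contacts
    symmetric contacts the grasp holds when
        n_contacts * mu * N >= m * g.
    """
    weight = mass_kg * G
    return weight / (friction_coeff * n_contacts)

# e.g. a 0.5 kg article with mu = 0.4 gripped at two finger pads:
# minimum_grasp_force(0.5, 0.4)  ->  about 6.1 N per contact
```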
In addition, by adopting two recognition models, namely the first recognition model and the second recognition model, the two recognition processes can run in parallel, reducing the reaction time of the intelligent artificial limb.
In addition, in one embodiment, after the article parameters are analyzed and calculated to obtain the minimum grasping static friction force of the action execution target, and the minimum grasping static friction force is taken as the action execution force, the method may further include the following steps:
A force adjustment value is obtained according to the minimum grasping static friction force and the pre-stored number of force adjustment levels. For example, if the pre-stored number of force adjustment levels is 5 and the minimum grasping static friction force is 10 N, the minimum grasping static friction force can be divided equally by the number of levels, giving a force adjustment value of 2 N.
Further, grasping of the action execution target is attempted according to the first action mode, the second action mode, and the action execution force.
Further, slip signals are collected in real time through the slip sensor of the intelligent prosthetic hand, and whether sliding exists is judged according to the slip signals.
If not, the action execution force is kept unchanged, and the action execution target is grasped and brought in front of the patient's eyes.
If yes, sliding exists at this time and the action execution force needs to be increased; that is, according to the pre-stored number of force adjustment levels, the action execution force is increased stepwise by the force adjustment value until the slip signals indicate that no sliding exists. For example, if the current action execution force is 10 N, 2 N is added at each step, at most 5 times, until sliding no longer occurs.
Through this technical scheme, fragile articles are prevented from being crushed by an excessive grasping force, while it is also ensured that articles do not drop because the grasping force is too small.
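The stepwise adjustment can be summarized in a short control loop; `execute_grasp` and `read_slip_sensor` below stand in for the prosthetic-hand actuation and slip-sensor interfaces, which the application does not name, so this is only a sketch of the control flow.

```python
def grasp_with_slip_control(execute_grasp, read_slip_sensor,
                            base_force: float, step: float,
                            max_steps: int = 5) -> float:
    """Attempt the grasp at base_force, then raise the force one
    adjustment level at a time while the slip sensor reports sliding.
    """
    force = base_force
    execute_grasp(force)
    for _ in range(max_steps):
        if not read_slip_sensor():   # no sliding: keep the current force
            break
        force += step                # add one force adjustment value
        execute_grasp(force)
    return force

# e.g. base force 10 N with 2 N steps, as in the example above:
# grasp_with_slip_control(hand.grasp, hand.slipping, 10.0, 2.0)
```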
In one embodiment, before the intelligent artificial limb collects the electromyographic signals at the stump of the patient through the electromyographic signal acquisition device and processes and analyzes them to obtain the analysis result, the method further comprises:
The intelligent artificial limb acquires the action intention of the patient and collects a plurality of training electromyographic signals at the stump of the patient according to the action intention, wherein the action intention includes at least one of the following: ball grasping, column grasping, and fingertip pinching.
It should be noted that the action intention may be conveyed by the patient's voice or other means so that the intelligent prosthesis acquires it; meanwhile, the action intention should be of a specific type, i.e., a type supported by the intelligent prosthesis.
The intelligent artificial limb inputs the plurality of training electromyographic signals and the corresponding action intentions into the initial fuzzy matching model for fuzzy rule training to obtain a plurality of fuzzy rules.
Further, the fuzzy matching model is constructed from the plurality of fuzzy rules. The corresponding first action mode can then be determined from the patient's electromyographic signals through the fuzzy matching model.
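How the fuzzy rules are derived is left open; a minimal sketch under one plausible reading follows, in which each intention's rule is simply the mean and scatter of its training feature vectors (matching the Gaussian-membership sketch shown earlier). The data layout and rule representation are assumptions.

```python
import numpy as np

def train_fuzzy_rules(samples: dict) -> dict:
    """Build one fuzzy rule per action intention from labeled EMG features.

    `samples` maps an intention ("ball_grasp", "column_grasp",
    "fingertip_pinch") to a list of feature vectors extracted from its
    training electromyographic signals. Each rule keeps the mean vector
    as the prototype and the per-class scatter as the membership spread.
    """
    rules = {}
    for intention, vectors in samples.items():
        stacked = np.stack(vectors)
        prototype = stacked.mean(axis=0)
        spread = float(stacked.std()) or 1.0  # guard against a zero spread
        rules[intention] = {"prototype": prototype, "spread": spread}
    return rules
```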
In one embodiment, after recognizing the action execution target through the pre-trained second recognition model to obtain the second recognition result and determining the action execution force according to the second recognition result, the method further includes:
Obtaining the corresponding bioelectric strength according to the action execution force.
Furthermore, the stimulating current consistent with the bioelectric strength is simulated through the electrode unit on the intelligent artificial limb.
Further, the stimulation current is applied at the stump of the patient. For example, the stimulation current may vary from 0.05 A to 0.25 A.
Through this technical scheme, the patient can sense the strength of the stimulation current at the stump and thereby perceive the action execution force.
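The mapping from force to current is not specified; the sketch below assumes a simple linear mapping onto the 0.05-0.25 A range quoted above, with the 50 N full-scale force chosen purely for illustration.

```python
def force_to_stimulation_current(force_n: float, max_force_n: float = 50.0,
                                 i_min: float = 0.05, i_max: float = 0.25) -> float:
    """Linearly map the action execution force to a stimulation current
    in amperes, clamped to the quoted 0.05-0.25 A range."""
    ratio = min(max(force_n / max_force_n, 0.0), 1.0)
    return i_min + ratio * (i_max - i_min)

# e.g. force_to_stimulation_current(10.0) -> 0.09 (A)
```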
In one embodiment, as shown in fig. 3, the present application further provides a control device based on an intelligent artificial limb, including:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the following instructions:
Collecting an electromyographic signal at the residual limb of a patient through an electromyographic signal collecting device, and processing and analyzing the electromyographic signal to obtain an analysis result;
Determining a first action mode according to the analysis result, wherein the first action mode includes at least one of the following: a ball grasping mode, a column grasping mode, and a fingertip pinching mode;
collecting an environment image and a human eye image through an image collecting device on the intelligent artificial limb, and determining an action execution target in the environment image according to the human eye image;
Recognizing the action execution target through a pre-trained first recognition model to obtain a first recognition result, and determining a second action mode according to the first recognition result, wherein the second action mode includes at least one of the following: side grasping, top grasping, and bottom grasping;
And recognizing the action execution target through a pre-trained second recognition model to obtain a second recognition result, and determining an action execution force according to the second recognition result.
In one embodiment, the present application also provides a non-volatile computer storage medium storing computer-executable instructions configured to:
Collecting an electromyographic signal at the residual limb of a patient through an electromyographic signal collecting device, and processing and analyzing the electromyographic signal to obtain an analysis result;
Determining a first action mode according to the analysis result, wherein the first action mode includes at least one of the following: a ball grasping mode, a column grasping mode, and a fingertip pinching mode;
collecting an environment image and a human eye image through an image collecting device on the intelligent artificial limb, and determining an action execution target in the environment image according to the human eye image;
Recognizing the action execution target through a pre-trained first recognition model to obtain a first recognition result, and determining a second action mode according to the first recognition result, wherein the second action mode includes at least one of the following: side grasping, top grasping, and bottom grasping;
And recognizing the action execution target through a pre-trained second recognition model to obtain a second recognition result, and determining an action execution force according to the second recognition result.
The embodiments of the present application are described in a progressive manner, and the same and similar parts of the embodiments are all referred to each other, and each embodiment is mainly described in the differences from the other embodiments. In particular, for the apparatus and medium embodiments, the description is relatively simple, as it is substantially similar to the method embodiments, with reference to the section of the method embodiments being relevant.
The devices and media provided in the embodiments of the present application are in one-to-one correspondence with the methods, so that the devices and media also have similar beneficial technical effects as the corresponding methods, and since the beneficial technical effects of the methods have been described in detail above, the beneficial technical effects of the devices and media are not repeated here.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, Random Access Memory (RAM), and/or nonvolatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include both permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, Phase-change Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technologies, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.