Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure more apparent, exemplary embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present disclosure and not all of the embodiments of the present disclosure, and that the present disclosure is not limited by the example embodiments described herein.
The present disclosure is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein merely illustrate the present disclosure and do not limit it. It should be further noted that, for convenience of description, only some, but not all, of the structures related to the present disclosure are shown in the drawings.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. For example, the first color and the second color are only used to distinguish two different colors.
Fig. 1 is a schematic view illustrating a scene to which a mapping method of a three-dimensional image according to an embodiment of the present disclosure is applied. As shown in Fig. 1, the user may freely select an area to be mapped (e.g., by a painting operation on the original three-dimensional image) and generate a corresponding map according to the user's needs. For example, a map may be generated by an artificial intelligence model according to a keyword or description entered by the user, such as "I want an xxxx map", so that the map generated to satisfy the user's needs is pasted into the map area freely selected by the user.
The mapping processing method can be applied to a hardware environment formed by a server and a terminal device. The platform may be used to provide services to the terminal device or to an application installed on the terminal device, which may be a video application, an instant messaging application, a browser application, an educational application, a gaming application, etc. A database may be provided on the server or separately from the server to provide data storage services for the server, for example a game data storage server. The network connecting the server and the terminal device may include, but is not limited to, a wired network or a wireless network, where the wired network includes a local area network, a metropolitan area network, or a wide area network, and the wireless network includes Bluetooth, WIFI, or other networks realizing wireless communication. The user equipment mainly includes the terminal device, a display device, and an input device. The terminal device may be a terminal on which an application program is installed and may include, but is not limited to, at least one of: a mobile phone (such as an Android mobile phone, an iOS mobile phone, etc.), a notebook computer, a tablet computer, a palm computer, an MID (Mobile Internet Device), a PAD, a desktop computer, a smart television, a smart voice interaction device, a smart household appliance, a vehicle-mounted terminal, an aircraft, etc. The input device may be a mouse, a VR handle, a scanner, a light pen, etc.; the display device may be an LED display screen; and the server may be a single server, a server cluster formed by a plurality of servers, or a cloud server. The specific types of the above devices are not limited herein and may be selected according to the actual situation. The terminal device is connected to the display device and the input device, and the connection may be a wired communication connection or a wireless communication connection.
Fig. 2 is a flowchart illustrating a mapping method of a three-dimensional image according to an embodiment of the present disclosure. As shown in Fig. 2, the mapping method of the three-dimensional image may include:
determining a three-dimensional selected region 400 in the three-dimensional image (step S201); determining and displaying a two-dimensional selected region in a two-dimensional image corresponding to the three-dimensional image based on the three-dimensional selected region 400 (step S202); generating target map data based at least on the size of the two-dimensional selected region (step S203); and replacing the image data of the two-dimensional selected region in the two-dimensional image with the target map data, generating a mapped two-dimensional image 407, and converting the mapped two-dimensional image 407 into a mapped three-dimensional image 408 (step S204).
In step S201, the platform may acquire an image and, according to a user operation, acquire a selected area selected by the user on the image with an input device. The image may be loaded on the terminal device based on the user's selection or uploaded directly by the user, and the dimension of the selected area is the same as that of the image. The image is not limited to two dimensions or three dimensions, the image format is not limited, and the user can select according to the actual situation. Further, for convenience of understanding, a three-dimensional image is taken as an example of the acquired image in the following description. The selected area can be a regular shape set by the platform or an irregular area selected by the user according to the actual situation. Further, the user may define the selected area by painting or by dragging with the input device.
As shown in step S202, since the dimension of the selected area is the same as that of the image, the selected area is three-dimensional. Considering that a three-dimensional image may have an irregular shape, mapping it directly would cause the map position to mismatch the model and fail to attach to the correct position. Therefore, the texture representation of the three-dimensional image can be unwrapped: each vertex on the three-dimensional model is projected onto a two-dimensional plane to generate a two-dimensional selected area, and projecting onto the two-dimensional plane reduces the influence of pose and shape changes.
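As a non-limiting illustration of this projection step, the following minimal sketch assumes each mesh vertex already carries UV coordinates from the unwrap; all function and parameter names here are hypothetical and are not taken from the embodiment.

```python
import numpy as np

def project_selection_to_2d(faces, uvs, selected_face_ids, tex_w, tex_h):
    """Map selected 3D triangle patches to pixel-space 2D triangles.

    faces: (F, 3) integer array of vertex indices per triangle patch.
    uvs:   (V, 2) float array of per-vertex UV coordinates in [0, 1].
    """
    tris_2d = []
    for fid in selected_face_ids:
        uv_tri = uvs[faces[fid]]                           # (3, 2) UVs of this patch
        tris_2d.append(uv_tri * np.array([tex_w, tex_h]))  # scale UVs to pixels
    return tris_2d
```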
Further, as shown in step S203, target map data may be generated according to the image size of the two-dimensional selected region, where the target map data includes at least the size of the target map 405. To facilitate the subsequent generation of the target map 405, the size of the target map 405 may be greater than or equal to the image size of the two-dimensional selected area; this prevents the target map 405 from being so small that subsequent enlargement affects the number of pixels, reduces the resolution, and increases the difficulty of later rendering.
Further, as shown in step S204, the target map data may further include image data; the image data of the two-dimensional selected region is replaced with the image data of the target map 405 to generate a mapped target image, and the mapped target image is converted into a three-dimensional image.
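One hedged way to realize the final conversion back to three dimensions, assuming the third-party trimesh library and its TextureVisuals API (an assumption of this sketch, not part of the embodiment), is to reattach the mapped two-dimensional image to the mesh through the existing UV coordinates:

```python
import trimesh

def texture_mesh(mesh, uv, mapped_2d_image):
    # Assumption: trimesh.visual.TextureVisuals accepts per-vertex UVs and a
    # PIL image; the mapped 2D texture is reattached to the 3D mesh.
    mesh.visual = trimesh.visual.TextureVisuals(uv=uv, image=mapped_2d_image)
    return mesh
```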
Fig. 3 is a flowchart further illustrating a mapping method of a three-dimensional image according to an embodiment of the present disclosure, and Fig. 4 is a schematic diagram illustrating an overall process of applying the mapping method of a three-dimensional image according to an embodiment of the present disclosure. As shown in Figs. 3 and 4, the mapping method may further include:
determining and displaying a two-dimensional selected region in a two-dimensional image corresponding to the three-dimensional image based on the three-dimensional selected region 400, including: mapping the three-dimensional image into a two-dimensional image and acquiring an initial two-dimensional image 403 that includes a two-dimensional selected area corresponding to the three-dimensional selected area 400; generating a selected region presentation image 402 corresponding to the initial two-dimensional image 403, where the texture and/or color of the selected region presentation image 402 differs from the initial two-dimensional image 403; generating a mask image 404 of the same size as the initial two-dimensional image 403, where the area corresponding to the two-dimensional selected area is displayed in a first color and the area other than the two-dimensional selected area is displayed in a second color; and displaying the two-dimensional selected area in the two-dimensional image corresponding to the three-dimensional image based on the initial two-dimensional image 403, the selected region presentation image 402, and the mask image 404, where the two-dimensional selected area is displayed with the texture and color of the selected region presentation image 402.
As shown in steps S301 and S302, after the three-dimensional selected region 400 is acquired, the platform may map the three-dimensional image in order to acquire the two-dimensional selected region. The user can select patches within the range of the three-dimensional image 401, where the patches can be triangular patches or quadrilateral patches and can be chosen according to the actual situation without limitation. The three-dimensional image is then unfolded along its seams to obtain the two-dimensional image.
Further, as shown in step S303, an initial two-dimensional image 403 is acquired from the three-dimensional selected region 400. In order to present the user-selected region in real time, the platform may generate a selected region presentation image 402 corresponding to the initial two-dimensional image 403, as shown in step S304. The selected region presentation image 402 is presented by the user terminal. The texture and/or color of the selected region presentation image 402 differs from that of the initial two-dimensional image 403 so that the user can easily distinguish the two.
As shown in step S305, a mask image 404 of the same size as the initial two-dimensional image 403 may be generated. The mask image 404 may be used to hide or reveal portions of an image layer. The region corresponding to the two-dimensional selected area may be displayed in a first color, and the other regions may be displayed in a second color, where the first color differs from the second color so as to distinguish different transparencies. The specific colors of the first color and the second color are not limited herein; the first color may be white or black and may be selected according to the actual situation.
Fig. 5 is a schematic diagram illustrating a mask generation process of the mapping method of a three-dimensional image according to an embodiment of the present disclosure. As shown in Fig. 5, taking the case where the patches selected by the user within the range of the three-dimensional image 401 are triangle patches as an example, the specific steps of generating the mask image 404 may include: traversing the triangle patches to obtain the triangle patches of the three-dimensional selected area 400; for the current triangle patch ABC, finding the corresponding triangle area A1B1C1 in the mask image 404 and obtaining the coordinates of A1, B1, and C1; and filling the area enclosed by A1, B1, and C1 with the first color.
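The triangle-filling step can be sketched with the Pillow library as follows; this is an illustrative sketch (taking triangle coordinates as produced by a projection such as project_selection_to_2d above), not the embodiment's exact implementation.

```python
from PIL import Image, ImageDraw

def build_mask(tris_2d, tex_w, tex_h, first_color=255, second_color=0):
    """tris_2d: list of (3, 2) pixel-space triangles A1B1C1."""
    mask = Image.new("L", (tex_w, tex_h), second_color)  # second color everywhere
    draw = ImageDraw.Draw(mask)
    for tri in tris_2d:
        # Fill the area enclosed by A1, B1 and C1 with the first color.
        draw.polygon([tuple(p) for p in tri], fill=first_color)
    return mask
```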
Further, as shown in step S306, in order to display the blending effect in real time, the initial two-dimensional image 403, the selected region presentation image 402, and the mask image 404 may be blended: the mask image 404 and the selected region presentation image 402 are superimposed, and the resulting presentation image is then superimposed onto the initial two-dimensional image 403.
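One plausible realization of this blend, assuming the three images share the same size and using the mask as an alpha channel (a sketch under those assumptions, not the embodiment's exact compositing):

```python
from PIL import Image

def blend_preview(initial_img, presentation_img, mask):
    # Where the mask holds the first color (255), the selected-region
    # presentation image shows through; elsewhere the initial image remains.
    return Image.composite(presentation_img, initial_img, mask)
```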
Generating target map data based at least on the size of the two-dimensional selected region includes: determining a minimum bounding rectangle 406 of the two-dimensional selected region, and generating target map data having the same aspect ratio as the minimum bounding rectangle 406.
Further, as shown in step S307, the specific steps of generating the target map data may include: calculating a minimum bounding rectangle of the first-color region of the mask image 404. The minimum bounding rectangle refers to the maximum extent of a set of two-dimensional shapes (such as points, straight lines, and polygons) expressed in two-dimensional coordinates, namely, the rectangle whose boundaries are given by the maximum abscissa, minimum abscissa, maximum ordinate, and minimum ordinate among the vertices of the given two-dimensional shape.
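A minimal sketch of this calculation, taking the mask as a NumPy array and returning the extreme coordinates described above (the names are illustrative):

```python
import numpy as np

def min_bounding_rect(mask_array, first_color=255):
    """mask_array: (H, W) uint8 array. Returns (x_min, y_min, x_max, y_max)."""
    ys, xs = np.nonzero(mask_array == first_color)
    return xs.min(), ys.min(), xs.max(), ys.max()
```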
The target map 405 is obtained based on a user operation; the platform may obtain the target map 405 through a user upload or a search instruction, or generate the target map 405 according to a requirement the user inputs at the interface, the specific form being freely selected by the user and not limited herein. Target map data having the same aspect ratio as the minimum bounding rectangle 406 is then obtained from the target image based on the size information of the minimum bounding rectangle. The target image is preferably greater than or equal to the minimum bounding rectangle: if it is too small, the user has too little margin for adjusting the position of the target image at a later stage, and when the user resizes the target image to fit the selected area image, the pixels of the target image change, which increases the later rendering difficulty.
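The aspect-ratio matching might, for example, be realized by center-cropping the target image to the rectangle's ratio so that no later enlargement is needed; the exact policy is an assumption of this sketch.

```python
def crop_to_aspect(img, rect_w, rect_h):
    """Center-crop a PIL image to the aspect ratio rect_w : rect_h."""
    w, h = img.size
    target_ratio = rect_w / rect_h
    if w / h > target_ratio:            # image too wide: trim the width
        new_w = int(h * target_ratio)
        off = (w - new_w) // 2
        return img.crop((off, 0, off + new_w, h))
    new_h = int(w / target_ratio)       # image too tall: trim the height
    off = (h - new_h) // 2
    return img.crop((0, off, w, off + new_h))
```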
Further, the mapping method of the three-dimensional image may include:
replacing the image data of the two-dimensional selected area in the two-dimensional image with the target map data to generate the mapped two-dimensional image 407, including: generating the mapped two-dimensional image 407 based on the map image corresponding to the target map data, the initial two-dimensional image 403, and the mask image 404.
The target map data is superimposed onto the mask image 404 to generate a process two-dimensional image, and the process two-dimensional image is superimposed onto the initial two-dimensional image 403 to generate the mapped two-dimensional image 407. As shown in step S308, a mapped three-dimensional image 408 is generated from the mapped two-dimensional image 407.
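As an illustrative sketch of this superposition (assuming Pillow images and the bounding rectangle computed earlier; the function name is hypothetical):

```python
def apply_target_map(initial_img, target_map, mask, rect):
    """Paste the target map into the bounding rectangle, masked so that only
    pixels inside the two-dimensional selected region are replaced."""
    x0, y0, x1, y1 = rect
    patch = target_map.resize((x1 - x0 + 1, y1 - y0 + 1))
    out = initial_img.copy()
    out.paste(patch, (x0, y0), mask.crop((x0, y0, x1 + 1, y1 + 1)))
    return out
```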
Further, the mapping method of the three-dimensional image may further include: determining a three-dimensional selected region 400 in a three-dimensional image includes: in response to a smearing operation on the three-dimensional image, a three-dimensional selected region 400 is determined.
Further, after the mapped three-dimensional image 408 is generated, it may be modified according to a modification operation of the user, where the modification operation includes at least translation, scaling, and rotation of the mapped three-dimensional image 408. The specific steps may include: obtaining the width and height of the modified mask image 404, calculating the sampling area of the modified target map 405, and updating the pixels of the initial two-dimensional image 403.
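The modification operations might be sketched as follows with Pillow; the transform parameters and the division of labor between resizing, rotating, and repositioning are assumptions of this illustration.

```python
def modify_map(target_map, scale=1.0, angle_deg=0.0, translate=(0, 0)):
    """Apply a user modification (scale, rotate, translate) to the target map."""
    w, h = target_map.size
    resized = target_map.resize((max(1, int(w * scale)), max(1, int(h * scale))))
    rotated = resized.rotate(angle_deg, expand=True)  # expand keeps all corners
    return rotated, translate  # the caller re-pastes at the translated position
```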
In the above, the mapping method of the three-dimensional image according to the embodiment of the present disclosure has been described; the map generation apparatus 600 for implementing the mapping method of the three-dimensional image will be further described below.
The map generation apparatus can be applied to the same hardware environment, formed by a server and a terminal device, as described above for the mapping processing method.
The map generation apparatus may include:
the three-dimensional region acquisition unit 601, which determines a three-dimensional selected region 400 in a three-dimensional image; the two-dimensional region acquisition unit 602, which determines and displays a two-dimensional selected region in a two-dimensional image corresponding to the three-dimensional image based on the three-dimensional selected region 400; the data generation unit 603, which generates target map data based at least on the size of the two-dimensional selected region; and the image generation unit 604, which replaces the image data of the two-dimensional selected region in the two-dimensional image with the target map data, generates a mapped two-dimensional image 407, and converts the mapped two-dimensional image 407 into a mapped three-dimensional image 408.
Specifically, the map generation apparatus 600 may include a three-dimensional region acquisition unit 601, a two-dimensional region acquisition unit 602, a data generation unit 603, and an image generation unit 604.
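Purely for illustration, the data flow between the four units could be wired as below; the class and method names are hypothetical, and the units' internals are assumed to follow the steps described for the method above.

```python
class MapGenerationApparatus:
    """Illustrative wiring of units 601-604; not the embodiment's code."""

    def __init__(self, region3d_unit, region2d_unit, data_unit, image_unit):
        self.region3d_unit = region3d_unit  # 601: acquires the 3D selected region
        self.region2d_unit = region2d_unit  # 602: projects it to a 2D region
        self.data_unit = data_unit          # 603: generates the target map data
        self.image_unit = image_unit        # 604: composites and converts back

    def run(self, image, user_operation, target_source):
        region3d = self.region3d_unit(image, user_operation)
        region2d = self.region2d_unit(image, region3d)
        map_data = self.data_unit(region2d, target_source)
        return self.image_unit(image, region2d, map_data)
```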
The three-dimensional region acquisition unit 601 may be configured to acquire an image, acquire, according to a user operation, a selected region selected by the user on the image with an input device, and send the three-dimensional selected region 400 to the two-dimensional region acquisition unit 602. The image may be loaded on the terminal device based on the user's selection or uploaded directly by the user, and the dimension of the selected area is the same as the dimension of the image. The image is not limited to two dimensions or three dimensions, the image format is not limited, and the user can select according to the actual situation. Further, for convenience of understanding, a three-dimensional image is taken as an example of the acquired image in the following description. The selected area can be a regular shape set by the platform or an irregular area selected by the user according to the actual situation. Further, the user may define the selected area by painting or by dragging with the input device.
The two-dimensional region acquisition unit 602 may receive the three-dimensional selected region 400, generate a two-dimensional selected region from the three-dimensional selected region 400, and send the two-dimensional selected region to the data generation unit 603. Because the dimension of the selected area is the same as that of the image, the selected area is three-dimensional. Considering that a three-dimensional image may have an irregular shape, mapping it directly would cause the map position to mismatch the image and fail to attach to the correct position. Therefore, the texture representation of the three-dimensional image can be unwrapped: each vertex on the three-dimensional image is projected onto a two-dimensional plane to generate the two-dimensional selected area, and projecting onto the two-dimensional plane reduces the influence of pose and shape changes.
Further, the data generation unit 603 may acquire target map data from the two-dimensional selected region. Target map data may be generated according to the image size of the two-dimensional selected region, where the target map data includes at least the size of the target map 405. To facilitate the subsequent generation of the target map 405, the size of the target map 405 may be greater than or equal to the image size of the two-dimensional selected area; this prevents the target map 405 from being so small that subsequent enlargement affects the number of pixels, reduces the resolution, and increases the rendering difficulty.
Further, upon receiving the target map data, the image generation unit 604 may replace the image data of the two-dimensional selected region in the two-dimensional image with the target map data, generate a mapped two-dimensional image 407, and convert the mapped two-dimensional image 407 into a mapped three-dimensional image 408. The target map data may also include image data; the image data of the two-dimensional selected region is replaced with the image data of the target map 405 to generate a mapped target image, which is converted into a three-dimensional image.
The two-dimensional region acquisition unit 602 is further configured to: map the three-dimensional image into a two-dimensional image and acquire an initial two-dimensional image 403 that includes a two-dimensional selected area corresponding to the three-dimensional selected area 400; generate a selected region presentation image 402 corresponding to the initial two-dimensional image 403, where the texture and/or color of the selected region presentation image 402 differs from the initial two-dimensional image 403; generate a mask image 404 of the same size as the initial two-dimensional image 403, where the area corresponding to the two-dimensional selected area is displayed in a first color and the area other than the two-dimensional selected area is displayed in a second color; and display the two-dimensional selected area in the two-dimensional image corresponding to the three-dimensional image based on the initial two-dimensional image 403, the selected region presentation image 402, and the mask image 404, where the two-dimensional selected area is displayed with the texture and color of the selected region presentation image 402.
After acquiring the three-dimensional selected region 400, the platform may map the three-dimensional image in order to acquire the two-dimensional selected region. The user can select patches within the range of the three-dimensional image 401, where the patches can be triangular patches or quadrilateral patches and can be chosen according to the actual situation without limitation. The three-dimensional image is then unfolded along its seams to obtain the two-dimensional image.
Further, an initial two-dimensional image 403 is acquired from the three-dimensional selected region 400. In order to present the user-selected region in real time, the platform may generate a selected region presentation image 402 corresponding to the initial two-dimensional image 403. The selected region presentation image 402 is presented by the user terminal. The texture and/or color of the selected region presentation image 402 differs from that of the initial two-dimensional image 403 so that the user can easily distinguish the two.
Further, a mask image 404 of the same size as the initial two-dimensional image 403 may be generated. The mask image 404 may be used to hide or reveal portions of an image layer. The region corresponding to the two-dimensional selected area may be displayed in a first color, and the other regions may be displayed in a second color, where the first color differs from the second color so as to distinguish different transparencies. The specific colors of the first color and the second color are not limited herein; the first color may be white or black and may be selected according to the actual situation.
Taking the case where the patches selected by the user within the range of the three-dimensional image 401 are triangle patches as an example, the specific steps of generating the mask image 404 may include: traversing the triangle patches to obtain the triangle patches of the three-dimensional selected area 400; for the current triangle patch ABC, finding the corresponding triangle area A1B1C1 in the mask image 404 and obtaining the coordinates of A1, B1, and C1; and filling the area enclosed by A1, B1, and C1 with the first color.
Further, to display the blending effect in real time, the initial two-dimensional image 403, the selected region presentation image 402, and the mask image 404 may be blended: the mask image 404 and the selected region presentation image 402 are superimposed, and the resulting presentation image is then superimposed onto the initial two-dimensional image 403.
The data generation unit 603 is further configured to: determine a minimum bounding rectangle 406 of the two-dimensional selected area and generate target map data having the same aspect ratio as the minimum bounding rectangle 406.
Further, based on the size of the two-dimensional selected region, the specific steps of generating the target map data may include: calculating a minimum bounding rectangle of the first-color region of the mask image 404. The minimum bounding rectangle refers to the maximum extent of a set of two-dimensional shapes (such as points, straight lines, and polygons) expressed in two-dimensional coordinates, namely, the rectangle whose boundaries are given by the maximum abscissa, minimum abscissa, maximum ordinate, and minimum ordinate among the vertices of the given two-dimensional shape.
The target map 405 is obtained based on a user operation; the platform may obtain the target map 405 through a user upload or a search instruction, or generate the target map 405 according to a requirement the user inputs at the interface, the specific form being freely selected by the user and not limited herein. Target map data having the same aspect ratio as the minimum bounding rectangle 406 is then obtained from the target image based on the size information of the minimum bounding rectangle. The target image is preferably greater than or equal to the minimum bounding rectangle: if it is too small, the user has too little margin for adjusting the position of the target image at a later stage, and when the user resizes the target image to fit the selected area image, the pixels of the target image change, which increases the later rendering difficulty.
The image generation unit 604 is further configured to: generate the mapped two-dimensional image 407 based on the map image corresponding to the target map data, the initial two-dimensional image 403, and the mask image 404.
Specifically, the target map data is superimposed onto the mask image 404 to generate a process two-dimensional image, and the process two-dimensional image is superimposed onto the initial two-dimensional image 403 to generate the mapped two-dimensional image 407.
The map generation apparatus may further include an area transmitting unit that determines the three-dimensional selected region 400 in response to a painting operation on the three-dimensional image.
Fig. 7 is a hardware block diagram illustrating an electronic device 700 according to an embodiment of the present disclosure. An electronic device according to an embodiment of the present disclosure includes at least a processor and a memory for storing computer-readable instructions. When the computer-readable instructions are loaded and executed by the processor, the processor performs the mapping method of the three-dimensional image in the virtual space 101 as described above.
The electronic device 700 shown in Fig. 7 specifically includes: a Central Processing Unit (CPU) 701, a Graphics Processing Unit (GPU) 702, and a main memory 703, which are interconnected by a bus 704. The Central Processing Unit (CPU) 701 and/or the Graphics Processing Unit (GPU) 702 may serve as the above-described processor, and the main memory 703 may serve as the above-described memory storing computer-readable instructions. In addition, the electronic device 700 may further comprise a communication unit 705, a storage unit 706, an output unit 707, an input unit 708, and an external device 709, which are also connected to the bus 704.
Fig. 8 is a schematic diagram illustrating a computer-readable storage medium according to an embodiment of the present disclosure. As shown in Fig. 8, the computer-readable storage medium 800 according to an embodiment of the present disclosure has computer-readable instructions 801 stored thereon. When the computer-readable instructions 801 are executed by a processor, the mapping method of the three-dimensional image in the virtual space 101 according to the embodiments of the present disclosure described with reference to the above figures is performed. Computer-readable storage media include, for example but are not limited to, volatile memory and/or non-volatile memory. Volatile memory can include, for example, Random Access Memory (RAM) and/or cache memory. Non-volatile memory can include, for example, Read-Only Memory (ROM), a hard disk, flash memory, an optical disk, a magnetic disk, and the like.
In the above, the mapping method and apparatus, the storage medium, and the electronic device for the three-dimensional image according to the embodiments of the present disclosure have been described with reference to the accompanying drawings. The target map 405 can be replaced according to the user-selected area, and the user can place the target map 405 at any desired position through the freely selected area, so that the map types of the three-dimensional image 401 are enriched, operability is stronger, the freedom of mapping is increased, and the user experience in the mapping process is effectively improved.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The basic principles of the present disclosure have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present disclosure are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present disclosure. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, since the disclosure is not necessarily limited to practice with the specific details described.
The block diagrams of the devices, apparatuses, and systems referred to in this disclosure are only illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by one of skill in the art, these devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including", "comprising", "having", and the like are open words meaning "including but not limited to" and are used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or", unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to".
In addition, as used herein, an "or" used in a recitation of items beginning with "at least one of" indicates a disjunctive recitation, such that a recitation of "at least one of A, B, or C", for example, means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Furthermore, the term "exemplary" does not mean that the described example is preferred or better than other examples.
It is also noted that in the systems and methods of the present disclosure, components or steps may be decomposed and/or recombined. Such decomposition and/or recombination should be regarded as equivalent solutions of the present disclosure.
Various changes, substitutions, and alterations are possible to the techniques herein without departing from the teachings as defined by the appended claims. Furthermore, the scope of the claims is not limited to the exact aspects of the process, machine, manufacture, composition of matter, means, methods and acts described above. The processes, machines, manufacture, compositions of matter, means, methods, or acts, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding aspects herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or acts.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit embodiments of the present disclosure to the forms disclosed herein. Although a number of example aspects and embodiments have been discussed above, those of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and sub-combinations thereof.