Detailed Description
The technical solution in the embodiments of the present invention will be described below with reference to the accompanying drawings. The particular methods of operation in the method embodiments may also be applied to apparatus embodiments or system embodiments. In the description of the present invention, the term "plurality" means two or more unless otherwise specified.
As shown in Fig. 1, a fast snapshot system 100 provided by an embodiment of the present invention includes a first camera and a second camera, where the first camera is a stereoscopic vision camera 110 and the second camera is a PTZ (Pan/Tilt/Zoom) camera 120.
The stereoscopic vision camera 110 in the system is provided with two lenses and can therefore simulate the principle of human binocular vision: two images of the current scene are obtained simultaneously from two different angles. Using the difference between the two shooting angles, the actual distance and angle between a target in the images and the stereoscopic vision camera can be calculated from the pixel-matching relationship between the images, and the coordinates of the target in the coordinate system of the stereoscopic vision camera 110 can then be derived. The stereoscopic vision camera 110 in the embodiment of the present invention refers to any camera or camera group that can simultaneously acquire images of the current scene from different lenses; this name does not limit the device itself, and other names may be used, such as a binocular stereo vision camera, a binocular range-finding camera, or a stereo vision range-finding camera. In practice, two ordinary video cameras can be combined into a set of stereoscopic cameras, or two ordinary camera lenses can be integrated on one device to form a stereoscopic camera.
The fast snapshot system 100 in the embodiment of the present invention may further include multiple sets of stereoscopic cameras to expand the monitoring range. The technical solution of the present invention is described below for the case of one set of stereoscopic cameras; the case of multiple sets can be implemented by analogy with the single-set case.
The PTZ camera 120 in the fast snapshot system 100 in the embodiment of the present invention is equipped with a pan-tilt head, which enables omnidirectional (left-right/up-down) movement and zoom control of the lens. The PTZ camera 120 is used to obtain, from the distance and angle between the target and the stereoscopic vision camera, the shooting parameters for capturing the target, including a shooting focal length and a shooting angle, and to capture an image of the target after adjusting according to the obtained parameters.
When the fast snapshot system 100 works, the stereoscopic vision camera 110 simultaneously obtains current monitoring pictures from its two lenses, where the current monitoring pictures include a first picture and a second picture shot at the same time by the first lens and the second lens of the stereoscopic vision camera 110, respectively. The stereoscopic vision camera 110 detects a target in the current pictures and, after detecting the target, calculates the distance and angle between the target and the stereoscopic vision camera 110. If multiple targets are detected, the distance and angle between each target and the stereoscopic vision camera 110 are calculated. The stereoscopic vision camera 110 then sends the distance and angle of the target to the PTZ camera 120. After receiving them, the PTZ camera 120 calculates the shooting parameters for capturing the target according to the distance and angle between the target and the stereoscopic vision camera 110. After obtaining the shooting parameters, the PTZ camera 120 adjusts its angle and lens focal length accordingly. Once the adjustment is completed, the PTZ camera 120 takes an image of the target.
When a more accurate capture of the target image is desired, the stereoscopic vision camera 110 may also calculate a corresponding distance estimation error for the target and send it to the PTZ camera 120, and the PTZ camera 120 may calculate the depth of field corresponding to the object distance between the target and the PTZ camera 120. Before adjustment, the PTZ camera 120 compares the distance estimation error of the target with the depth of field at that object distance: if the depth of field is greater than or equal to the distance estimation error, the PTZ camera 120 is adjusted directly according to the shooting parameters and the target is captured; if the depth of field is less than the distance estimation error, the PTZ camera 120 is adjusted according to the shooting parameters and is then further controlled to auto-focus.
In some cases, multiple targets may be detected in the current monitoring pictures of the stereoscopic vision camera 110. The stereoscopic vision camera 110 calculates the distance and angle between each target and the stereoscopic vision camera 110, and the PTZ camera 120 obtains the corresponding shooting parameters for each target accordingly. The PTZ camera 120 determines the capture order of the multiple targets according to priority and is adjusted to capture the images of the multiple targets in the determined order. The priority is determined according to parameters including the position, angle, distance, and moving track of each target, or whether the target is about to leave the monitored area.
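For illustration only (not part of the claimed embodiments), the priority-based ordering of multiple targets can be sketched in Python. The scoring used here is one hypothetical choice among the factors listed above: targets about to leave the monitored area are captured first, then nearer targets before farther ones.

```python
def snapshot_order(targets):
    """Order detected targets for capture by priority.

    Each target is a dict describing one detection.  The scoring is
    hypothetical: targets flagged as about to leave the monitored area
    come first, then nearer targets before farther ones.
    """
    return sorted(
        targets,
        key=lambda t: (not t.get("leaving_area", False),
                       t.get("distance", float("inf"))),
    )
```

A real system would weight the position, angle, and trajectory factors as the application scenario requires.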
In the above embodiment of the fast snapshot system, the stereoscopic vision camera 110 and the PTZ camera 120 are set up separately, and each independently performs its corresponding computation and control functions. The stereoscopic vision camera 110 and the PTZ camera 120 may be connected in a wired or wireless manner to enable information interaction between the two devices. It should be noted that the computations performed in the stereoscopic vision camera 110, such as detecting the target and calculating the distance and angle between the target and the stereoscopic vision camera 110, can also be performed by the PTZ camera 120; those skilled in the art can make such modifications to the fast snapshot system in this embodiment, and such modifications should be considered within the scope of the claims of the present invention and their equivalents.
Fig. 2 is a schematic diagram illustrating a possible structure of a fast capturing apparatus provided in an embodiment of the present invention, where the apparatus includes: a first camera 210, a processor 220, a memory 230, a second camera 240, and at least one communication bus 250, wherein the first camera 210 may be a stereoscopic vision camera 210 and the second camera 240 may be a PTZ camera 240.
Processor 220 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of programs in accordance with aspects of the present invention. The processor 220 may also be implemented using an FPGA or a DSP.
The memory 230 may be a volatile memory, such as a random-access memory (RAM); a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD); or a combination of the above types of memories. It provides instructions and data to the processor.
The bus 250 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 250 may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 2, but it is not intended that there be only one bus or one type of bus.
The stereoscopic camera 210 includes two lenses, and is configured to obtain a current monitoring frame from the two lenses at the same time, where the current monitoring frame includes a first frame and a second frame, and the first frame and the second frame are respectively frames captured by the two lenses of the stereoscopic camera 210 at the same time.
The processor 220 detects a target in the current monitoring picture of the stereoscopic vision camera 210, calculates the distance and angle between the target and the stereoscopic vision camera 210, and calculates shooting parameters corresponding to the PTZ camera 240 according to the distance and angle between the target and the stereoscopic vision camera 210, wherein the shooting parameters comprise a shooting angle and a lens focal length. After obtaining the shooting parameters required by PTZ camera 240 to capture the target, processor 220 generates corresponding control signals according to the shooting parameters, and controls PTZ camera 240 to adjust the shooting angle and the focal length of the lens.
PTZ camera 240 may rotate up and down and left and right, rotating to the angle specified by the shooting angle in the shooting parameters. The focal length of PTZ camera 240 is variable, and processor 220 may control PTZ camera 240 to adjust the lens focal length and capture an image of the target. PTZ camera 240 is also capable of auto-focusing, which may be performed in accordance with auto-focus commands sent by processor 220.
In a possible implementation manner, the processor 220 is further configured to calculate a corresponding distance estimation error according to the distance between the target and the stereoscopic vision camera 210, calculate the depth of field corresponding to the object distance of the PTZ camera 240 according to the object distance between the target and the PTZ camera 240 and the lens focal length, and compare the distance estimation error with the depth of field. If the distance estimation error is less than or equal to the depth of field, the processor 220 controls the PTZ camera 240 to adjust according to the shooting parameters and capture an image of the target; if the distance estimation error is larger than the depth of field, the processor 220 controls the PTZ camera 240 to adjust according to the shooting parameters and then to automatically adjust the focal length of the lens.
In some possible cases, if the processor 220 detects multiple targets, it determines a snapshot order of the multiple targets according to priority and controls the PTZ camera 240 to capture images of the multiple targets in the determined order, where the parameters according to which the priority is determined include the orientation, angle, distance, and trajectory of each target, or whether the target is about to leave the monitored area.
Optionally, the fast capture apparatus 200 may further include a communication interface 260, which is used to transmit the captured image of the target to an external apparatus.
The fast capture apparatus 200 in the above embodiment integrates the stereoscopic vision camera 210, the processor 220, and the PTZ camera 240. The three components may be provided separately and connected in a wired or wireless manner to enable communication between them, or they may be combined, and one device may implement the functions of the processor 220 and the memory 230. The stereoscopic vision camera 210 and the PTZ camera 240 in the fast capture apparatus 200 are mainly used to acquire the target image, and the processor 220 performs the computation and control operations. It should be noted that, according to actual needs, part of the computation or control functions of the processor 220 may be implemented by the stereoscopic vision camera 210 or the PTZ camera 240; those skilled in the art may make such modifications to this embodiment, and such modifications should be considered within the scope of the claims of the present invention and their equivalents.
As shown in Fig. 3, the network camera 300 has a configuration common to the first camera and the second camera in the fast snapshot system 100 and the fast capture apparatus 200. The network camera 300 includes the structure shared by the first camera and the second camera in the above-described embodiments; for ease of understanding, standard features of the network camera 300 not relevant to the present invention are not described again. The network camera 300 includes a lens 310 as its front-end part, the lens 310 having a fixed aperture, auto zoom, and the like; an image sensor 320, such as a complementary metal oxide semiconductor (CMOS) or charge-coupled device (CCD) sensor, for recording incident light; an image processor 330; a processor 340 for performing computations and controlling the camera; a memory 350 for storing programs or data; a communication bus 360 for communicating information between the various components; and a communication interface 370 for communicating information over a communication network to other nodes connected to the network.
The image sensor 320 records the incident light, and this information is processed by means of an A/D converter and a signal processor 331, both of which are well known to the skilled person. In some embodiments, for example when the image sensor 320 is a CMOS sensor, the image sensor 320 includes an A/D converter, so no A/D converter is required in the image processor 330. The result produced by the A/D converter and signal processor 331 is digital image data, which according to one embodiment is processed in a scaling unit 332 and an image encoder 333 before being sent to the processor 340. The scaling unit 332 is used to process the digital image data into at least one image of a specific size. However, the scaling unit 332 may also be arranged to generate a plurality of differently sized images, all representing the same image/frame provided by the A/D converter and the signal processor 331. According to another embodiment, the function of the scaling unit 332 is performed by the image encoder 333, and in yet another embodiment, no scaling or resizing of the image from the image sensor 320 is needed.
The encoder 333 is optional for carrying out the invention and is arranged to encode the digital image data into any of a number of known formats, for a continuous video sequence, a limited video sequence, a still image, or an image/video stream. For example, the image information may be encoded as MPEG-1, MPEG-2, MPEG-4, JPEG, M-JPEG, bitmap, or the like. The processor 340 may use an unencoded image as input data; in this case, the image data is transferred from the signal processor 331 or from the scaling unit 332 to the processor 340 without passing through the image encoder 333. The unencoded image may be in any unencoded image format, such as BMP, PNG, PPM, PGM, PNM, or PBM, although the processor 340 may also use encoded data as input data.
In one embodiment of the present invention, the image data may be directly transmitted from the signal processor 331 to the processor 340 without passing through the scaling unit 332 or the image encoder 333. In yet another embodiment, the image data may be sent from the scaling unit 332 to the processor 340 without passing through the image encoder 333.
Processor 340 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of programs in accordance with the present invention. The processor 340 may also be implemented using a field-programmable gate array (FPGA) or a DSP. When software-based compression coding on a DSP is employed, some of the functions of the image processor 330 may also be integrated on the processor 340. The processor 340 is used to manage and control the network camera 300.
Memory 350 is used to store application program code for performing aspects of the present invention. It may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random-access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 350 may be self-contained and coupled to the processor 340 via the bus 360, or may be integrated with the processor 340.
Communication bus 360 may include a path that transfers information between components.
Communication interface 370 may use any transceiver-type device for communicating with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
The stereoscopic vision camera in the above embodiments adds a plurality of lenses to the general-purpose network camera 300, so that the current pictures are obtained simultaneously from the plurality of lenses; the PTZ camera in the above embodiments adds a pan-tilt head to the general-purpose network camera 300 to enable omnidirectional (left-right/up-down) movement and adjustment of the shooting angle of the PTZ camera.
The fast snapshot system and method provided by the embodiments of the present invention will be further described with reference to the accompanying drawings.
As shown in Fig. 4, in the fast snapshot method provided by the embodiment of the present invention, when a target is captured, the stereoscopic vision camera is used to assist the PTZ camera in focusing, which solves the problem of overly long focusing time in existing capture technology. The fast snapshot method provided by the embodiment of the present invention can be applied to the fast snapshot system 100 in Fig. 1 and the fast capture apparatus 200 in Fig. 2, and is used for capturing fast-moving targets in a large scene. A specific implementation of the method is described below with reference to Fig. 1. The fast snapshot method provided by the embodiment of the present invention includes the following steps:
Step 410: detect a target in the current pictures of the first camera, and calculate the distance and angle between the target and the first camera.
The first camera may be the stereoscopic vision camera 110, and the current monitoring pictures include a first picture and a second picture, shot at the same time by the first lens and the second lens of the stereoscopic vision camera 110, respectively. A target can be detected in the pictures, and the distance and angle between the target and the stereoscopic vision camera 110 can be calculated from the visual disparity of the target between the different pictures.
Step 420: calculate the shooting parameters of the second camera according to the distance and angle between the target and the first camera.
In the fast snap-shot system, the second camera is PTZ camera 120. After the distance and angle between the target and the stereoscopic vision camera 110 are obtained, the object distance and angle between the target and the PTZ camera 120 can be obtained according to the position relationship between the stereoscopic vision camera 110 and the PTZ camera 120, and then the shooting parameters of the PTZ camera 120 for shooting the target are obtained, wherein the shooting parameters comprise a shooting angle and a lens focal length.
Step 430: adjust the shooting angle and lens focal length of the second camera according to the shooting parameters, and then capture the target.
The shooting angle and the lens focal length are the two most important parameters for the PTZ camera 120 to capture the target. After the shooting angle and lens focal length of the PTZ camera 120 are obtained in the above steps, the pan-tilt of the PTZ camera 120 can be controlled to rotate to the corresponding angle, and the lens of the PTZ camera 120 can be controlled to zoom so that the focal length is adjusted to the corresponding value, after which the target is captured.
Through the above steps, the PTZ camera 120 can complete the snapshot of the target without auto-focusing, avoiding a lengthy auto-focus process; this improves snapshot efficiency, makes the snapshot more timely, and produces a clearer image.
The specific implementation method of the above steps is further described below with reference to the accompanying drawings.
As shown in Fig. 5, a method of calculating the distance between a target and the stereoscopic vision camera 110 is provided for the embodiment of the present invention. Here Ol and Or are the center points of the image planes of the left and right cameras of the stereoscopic vision camera 110, respectively; a target point P is imaged on the image planes of the left and right cameras at points Pl and Pr, respectively; and each light path runs from the target point through the center of the lens plane to the imaging point on the image plane. According to the principle of similar triangles, the following can be obtained:
(B+(xl-xr))/D=B/(D-f)
after simplification, the following is obtained:
D=(f×(B+xl-xr))/(xl-xr)
wherein B is the baseline distance, namely the distance between the left and right cameras; f is the focal length of the stereoscopic vision camera; xl is the horizontal distance between the imaging point of the target in the left camera and the center point of the left camera's image plane; xr is the horizontal distance between the imaging point of the target in the right camera and the center point of the right camera's image plane; and D is the distance of the target from the stereoscopic vision camera.
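As an illustrative sketch of the distance calculation above (the variable names follow the definitions of this embodiment; units are assumed consistent):

```python
def stereo_distance(B, f, xl, xr):
    """Distance D between the target and the stereo camera, using the
    similar-triangles relation of this embodiment:

        D = f * (B + xl - xr) / (xl - xr)

    B  -- baseline distance between the left and right lenses
    f  -- focal length of the stereoscopic vision camera
    xl -- horizontal offset of the target's image point in the left view
    xr -- horizontal offset of the target's image point in the right view
    All lengths must be in consistent units (e.g. millimetres).
    """
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("non-positive disparity: cannot estimate distance")
    return f * (B + disparity) / disparity
```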
On the basis of the distance calculation, the angles between the target and the left camera, between the target and the right camera, and between the target and the center line of the stereoscopic vision camera can be calculated using trigonometric formulas.
However, the distance measured by stereo ranging is not absolutely accurate; the measured distance has a certain error, called the distance estimation error, whose range is related to the lens parameters of the stereoscopic vision camera, the pixel size of the photosensor, and the distance between the two lenses. The distance estimation error increases with the distance of the target, in direct proportion to the square of the distance. Thomas Luhmann, Close-Range Photogrammetry and 3D Imaging (2014), provides a method for calculating distance estimation errors. In addition, the coefficient between the distance estimation error and the square of the distance can be obtained through repeated measurements, from which the relationship between the measured distance of the target and the distance estimation error follows.
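The square-law error model, together with calibration of its coefficient through repeated measurements, can be sketched as follows (illustrative only; the coefficient k and the calibration data are assumed inputs, to be measured for a real camera):

```python
def fit_error_coefficient(distances, errors):
    """Least-squares fit of k in the model err ~ k * D**2, from repeated
    calibration measurements (the data themselves are assumed inputs)."""
    numerator = sum(e * d * d for d, e in zip(distances, errors))
    denominator = sum(d ** 4 for d in distances)
    return numerator / denominator

def distance_estimation_error(D, k):
    """Predicted ranging error at measured distance D, proportional to
    the square of the distance as stated above."""
    return k * D ** 2
```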
There are many calculation methods for measuring the target distance by using the stereo distance measurement technology, and the above calculation method is only one calculation method adopted in one embodiment of the present invention, and does not limit the protection scope of the present invention.
In step 420, after the distance and angle between the target and the stereoscopic vision camera 110 are obtained, the object distance between the target and the PTZ camera 120 and the shooting parameters of the PTZ camera 120 may be obtained in various ways.
In one implementation, the distance between the target and the stereoscopic vision camera 110 may be directly taken as the object distance between the target and the PTZ camera 120, the angle between the target and the stereoscopic vision camera 110 taken as the shooting angle of the PTZ camera 120, and the shooting focal length of the PTZ camera 120 obtained from that object distance. Since the fast snapshot system 100 in the embodiment of the present invention is generally used to capture long-distance targets, the distance between the target and the fast snapshot system 100 is much larger than the distance between the lenses of the stereoscopic vision camera 110 and the lens of the PTZ camera 120; where the precision requirement is low, the distance and angle between the target and the stereoscopic vision camera 110 can therefore be treated as equal to the object distance and angle between the target and the PTZ camera 120. In addition, the stereoscopic vision camera 110 generally includes two lenses, and its coordinate system is usually set up with the midpoint between the two lenses as the origin; if the PTZ camera 120 is located at that midpoint, the coordinate system of the stereoscopic vision camera 110 coincides with that of the PTZ camera 120, and the distance and angle between the target and the stereoscopic vision camera 110 equal the object distance and angle between the target and the PTZ camera 120.
In another implementation, the spatial position difference between the stereoscopic vision camera 110 and the PTZ camera 120 is taken into account: the object distance and angle between the target and the PTZ camera 120 are calculated from this spatial position difference together with the distance and angle between the target and the stereoscopic vision camera 110, the resulting angle is taken as the shooting angle of the PTZ camera 120, and the shooting focal length of the PTZ camera 120 is obtained from the object distance.
Given the spatial position difference between the stereoscopic vision camera 110 and the PTZ camera 120, the coordinates of the target in the coordinate system of the PTZ camera 120 can be obtained by three-dimensional coordinate conversion.
As shown in Fig. 6, a manner of implementing the coordinate transformation between the stereoscopic vision camera and the PTZ camera is provided for the embodiment of the present invention, where the point O is the coordinate origin of the coordinate system of the PTZ camera and the point O' is the coordinate origin of the coordinate system of the stereoscopic vision camera. The relationship between points in the two coordinate systems is as follows:
[X, Y, Z]^T = λ×R×[X', Y', Z']^T + [ΔX, ΔY, ΔZ]^T
wherein [X', Y', Z']^T are the coordinates of a point in the stereoscopic vision camera coordinate system and [X, Y, Z]^T are its coordinates in the PTZ camera coordinate system; λ is a scale factor between the two coordinate systems; ΔX, ΔY, and ΔZ are the position differences between the coordinate origin of the PTZ camera coordinate system and the coordinate origin of the stereoscopic vision camera coordinate system; and R is the rotation matrix of the coordinate conversion, used to rotate each coordinate axis of the stereoscopic vision camera coordinate system to the corresponding coordinate axis of the PTZ camera coordinate system.
R = R(εX)×R(εY)×R(εZ)
wherein εX, εY, and εZ are the rotation angles about the X, Y, and Z axes, respectively.
Expanding R into its matrix form and substituting into the above relation yields the coordinates of the target in the coordinate system of PTZ camera 120, from which the object distance and the angle of the target relative to PTZ camera 120 can be obtained.
After the object distance of the target from PTZ camera 120 is obtained, the focal length of the lens of PTZ camera 120 may be obtained in a variety of ways. Specifically, as one implementation, a lookup table mapping object distance to lens focal length may be stored in advance; after the object distance of the target is calculated, the table is queried to obtain the shooting focal length of the PTZ camera 120.
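A minimal sketch of such a lookup, with linear interpolation between entries (the table values below are hypothetical placeholders, not calibration data from the embodiment):

```python
from bisect import bisect_left

# Hypothetical pre-stored table: object distance (m) -> lens focal
# length (mm).  Real entries would come from calibrating the actual
# PTZ lens.
FOCUS_TABLE = [(5.0, 12.0), (10.0, 25.0), (20.0, 50.0), (40.0, 100.0)]

def lookup_focal_length(object_distance):
    """Query the table, interpolating linearly between neighbouring
    entries and clamping outside the calibrated range."""
    dists = [d for d, _ in FOCUS_TABLE]
    i = bisect_left(dists, object_distance)
    if i == 0:
        return FOCUS_TABLE[0][1]
    if i == len(FOCUS_TABLE):
        return FOCUS_TABLE[-1][1]
    (d0, f0), (d1, f1) = FOCUS_TABLE[i - 1], FOCUS_TABLE[i]
    t = (object_distance - d0) / (d1 - d0)
    return f0 + t * (f1 - f0)
```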
Further, a lookup table mapping the distance and angle between the target and the stereoscopic vision camera 110 to the shooting parameters of the PTZ camera 120 may be stored in advance, and queried according to the measured distance and angle to obtain the corresponding shooting parameters.
Optionally, before PTZ camera 120 is adjusted in step 430, the depth of field of PTZ camera 120 at the target may also be calculated and compared with the distance estimation error to determine how to adjust PTZ camera 120.
Fig. 7 is a schematic diagram illustrating a method for calculating a depth of field according to an embodiment of the present invention.
wherein δ is the permissible diameter of the circle of confusion, f is the focal length of the lens, F is the shooting aperture value of the lens, L is the focusing distance (object distance), ΔL1 is the front depth of field, ΔL2 is the rear depth of field, and ΔL is the total depth of field. The depth of field is calculated as follows:
ΔL1 = (F×δ×L^2)/(f^2 + F×δ×L)
ΔL2 = (F×δ×L^2)/(f^2 - F×δ×L)
ΔL = ΔL1 + ΔL2 = (2×f^2×F×δ×L^2)/(f^4 - F^2×δ^2×L^2)
In a given application scene, the aperture value F and the circle-of-confusion diameter δ of the camera are both fixed. Since the shooting focal length of the PTZ camera 120 grows with the object distance of the target, the formulas show that the depth of field becomes infinite once the object distance falls below a certain value, and decreases as the object distance increases.
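The standard thin-lens depth-of-field formulas (stated here as an assumption, using the symbols defined above) can be computed as follows:

```python
def depth_of_field(f, F, delta, L):
    """Front, rear and total depth of field from the standard thin-lens
    formulas (an assumption consistent with the symbols above):

        dL1 = F*delta*L**2 / (f**2 + F*delta*L)
        dL2 = F*delta*L**2 / (f**2 - F*delta*L)

    f: lens focal length, F: aperture value, delta: permissible
    circle-of-confusion diameter, L: focusing distance, all in
    consistent units.  The rear depth (and hence the total) becomes
    infinite once f**2 <= F*delta*L.
    """
    front = F * delta * L ** 2 / (f ** 2 + F * delta * L)
    if f ** 2 <= F * delta * L:
        return front, float("inf"), float("inf")
    rear = F * delta * L ** 2 / (f ** 2 - F * delta * L)
    return front, rear, front + rear
```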
The depth of field of PTZ camera 120 at the target is compared with the distance estimation error of the target. If the depth of field is greater than or equal to the distance estimation error, the depth of field covers the range of the distance estimation error; even if the distance calculation contains some error, the target remains within the depth of field and a clear image can be shot, so PTZ camera 120 can be adjusted directly according to the shooting parameters and the target captured. If the depth of field is smaller than the distance estimation error, the depth of field may not fully cover the error range and a clear image of the target might not be captured, so after being adjusted according to the above shooting parameters, PTZ camera 120 may need auxiliary auto-focus before capturing the target.
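The comparison rule above reduces to a small decision function (illustrative sketch only; the return labels are hypothetical names for the two branches):

```python
def capture_plan(dof, distance_error):
    """Decision rule of this embodiment: shoot directly when the depth
    of field covers the distance estimation error, otherwise run an
    auxiliary auto-focus pass after adjusting the PTZ camera."""
    if dof >= distance_error:
        return "adjust_and_capture"
    return "adjust_then_autofocus_and_capture"
```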
In a specific implementation, as an embodiment, if multiple targets are detected in the monitoring pictures of the stereoscopic vision camera 110, the distance and angle between each target and the stereoscopic vision camera 110 are calculated separately, and the shooting parameters of the PTZ camera 120 corresponding to each target are obtained accordingly. A snapshot order of the multiple targets is determined according to priority, and the PTZ camera 120 captures the images of the targets in the determined order. The parameters according to which the priority is determined include the position, angle, distance, and motion trajectory of each target, whether the target is about to leave the monitored area, and the like. This list is non-exhaustive and may include other parameters. Not all of the above parameters are required; several may be selected as priority factors according to the actual application scenario.
The above method embodiment has been described with reference to the fast snapshot system 100 in fig. 1. The specific steps for implementing the method embodiment in the fast snapshot apparatus 200 in fig. 2 are similar, as will be clear to those skilled in the art, and the detailed description is not repeated here.
Embodiments of the present invention further provide a computer-readable storage medium for storing computer software instructions for the above fast snapshot apparatus and/or system, including program code designed to execute the above method embodiments. When the stored program code is executed, the distance and angle of the target are measured quickly by stereoscopic vision, the shooting parameters for the snapshot are obtained from them, and the PTZ camera is adjusted quickly to complete the snapshot of the target; this shortens the snapshot time and guarantees the real-time performance of the snapshot.
The embodiment of the invention also provides a computer program product. The computer program product comprises computer software instructions that can be loaded and executed by a processor to implement the method in the above-described method embodiments.
While the invention has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, apparatus (system), or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects, all of which may generally be referred to herein as a "module" or "system". Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. A computer program stored or distributed on a suitable medium, supplied together with or as part of other hardware, may also take other distributed forms, such as via the Internet or other wired or wireless telecommunication systems.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the invention has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the invention. Accordingly, the specification and figures are merely exemplary of the invention as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the invention. It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.