System and Method for Image Registration
The invention relates to a system and method for image registration. More specifically, but not exclusively, it relates to a system and method for image registration incorporating the introduction of artificial common registration control point data into a sensor field of view (FoV), which can then be shared by disparate sensors in order to provide a common field of reference for the entire system.
In currently known systems of image registration, solely passive methods are used, including but not limited to:
1. Intensity Based Image Registration
Intensity-based methods compare intensity patterns in images via correlation metrics. They register entire images or sub-images by comparing their respective intensity profiles under a given metric (e.g. Sum of Absolute Differences). When the images are registered, the metric is at a minimum, and the centre of each corresponding sub-image is treated as a corresponding feature point.
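The approach above can be sketched as an exhaustive search over integer shifts that minimises the Sum of Absolute Differences. This is an illustrative sketch only, not a specific implementation from the invention; the function name and test images are hypothetical.

```python
import numpy as np

def sad_register(reference, target, max_shift=3):
    """Search integer shifts (dy, dx) and return the offset at which
    `target` best matches `reference`, scored by the mean Sum of
    Absolute Differences over the overlapping region."""
    best_shift, best_score = (0, 0), float("inf")
    h, w = reference.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlapping windows of the two images under this shift.
            ref = reference[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
            tgt = target[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
            score = np.abs(ref.astype(float) - tgt.astype(float)).mean()
            if score < best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift

# A synthetic image and a copy displaced by (2, 1); the SAD metric is
# minimised (here, exactly zero) at the true offset.
rng = np.random.default_rng(0)
image = rng.random((32, 32))
shifted = np.roll(np.roll(image, 2, axis=0), 1, axis=1)
print(sad_register(shifted, image))  # → (2, 1)
```

In practice the search is run per sub-image, so that each recovered offset yields one corresponding feature point pair.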
2. Feature Based Image Registration
Feature-based methods find correspondences between image features such as points, lines, and contours. A feature-based method establishes correspondence between a number of points in the images; knowing the correspondence between those points, a transform is then determined to map the target image onto the reference image.
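Once point correspondences are established, the mapping transform can be estimated by least squares. The sketch below assumes the correspondences are already known and fits a 2-D affine transform; the function name and sample points are illustrative, not taken from the invention.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping points src -> dst.
    src and dst are (N, 2) arrays of corresponding feature points;
    at least three non-collinear pairs are required."""
    n = len(src)
    A = np.hstack([src, np.ones((n, 1))])          # design matrix [x, y, 1]
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params.T                                 # 2x3 matrix [A | t]

# Three correspondences related by a pure translation of (5, -2).
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
dst = src + np.array([5.0, -2.0])
M = fit_affine(src, dst)
print(np.round(M, 3))  # identity linear part, translation column (5, -2)
```

With more than three correspondences the same call gives the least-squares fit, which suppresses the effect of small localisation errors in individual feature points.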
3. Frequency Based Image Registration
These methods use metrics in the frequency domain to compare respective images, and include methods such as phase correlation.
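Phase correlation recovers a translation from the phase of the cross-power spectrum of the two images. The following is a minimal sketch of that standard technique, with hypothetical test data; it is not drawn from the invention itself.

```python
import numpy as np

def phase_correlate(reference, target):
    """Recover the integer translation between two equally sized images
    from the phase of their cross-power spectrum."""
    F1 = np.fft.fft2(reference)
    F2 = np.fft.fft2(target)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase, drop magnitude
    response = np.fft.ifft2(cross_power).real    # impulse at the true shift
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    return dy, dx

rng = np.random.default_rng(1)
image = rng.random((64, 64))
shifted = np.roll(np.roll(image, 3, axis=0), 7, axis=1)
print(phase_correlate(shifted, image))  # → (3, 7)
```

Because only the spectral phase is used, the method is comparatively insensitive to global intensity differences between the two images.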
4. Interactive Image Registration
This is a manual method of image registration, which requires the user to identify a series of correlated points in each input image which are then used to calculate a transform to bring all of the input images onto the same coordinate set. This can also be done by placing image registration markers into the scene for later identification for use in Feature Based Methods (see 2).
5. Calibrated Field of View Image Registration
This uses co-boresighted sensors, often on the same platform (and sometimes sharing the same optics or even the same sensor array), with known offsets in parallax and field of view. These offsets are then accounted for in the calculation of the required image transform.
Passive methods rely upon either tightly controlled interrelated physical aspects (calibrated fields of view for example) or shared features between disparate sensors (landmarks for example).
Both of these become increasingly difficult in real-world applications due to uncontrolled movement (for example, vibration affecting common physical calibration), difficulty in common feature detection, and lack of shared features (especially between disparate-band sensors: it would be very difficult to align a Terahertz imager with an infrared imager using feature-based methods, for example, even using multi-modal registration algorithms).
According to the invention there is provided a system for improving image data extraction from a target image comprising a plurality of sensors having a common field of view, in which at least one sensor maps a series of control data points onto the target, the remaining sensor or sensors using the control data points to calculate a transform with which to map image data output by the first sensor to image data output by the remaining sensor or sensors, thereby allowing greater information to be gained about the target area.
According to a further aspect of the invention there is provided a method of improving image registration in a system having a plurality of sensors, comprising the steps of superimposing a series of data control points onto a target area, extracting image data relating to the target using the sensors, and transforming the image data received from the sensors relating to the target area using the data control points superimposed on the target, thereby improving the accuracy and detail of the image data received.
In this way, the introduction of artificial common registration control point data into a sensor field of view, which can then be shared by disparate sensors, provides a common field of reference for the entire system. Phase-sensitive arrays can also be used in order to allow orientation information to be passed by uniquely identifying each data point in each dataset. This provides a given number of registration markers, which can then be automatically identified and used to calculate the necessary transforms without the need for user intervention.
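Once the projected control points have been detected and uniquely identified in each sensor's image, the inter-sensor transform can be computed directly from the point pairs. The sketch below estimates a planar homography by the standard Direct Linear Transform; it is one possible realisation under stated assumptions (points already detected and matched), not the claimed implementation, and all names and values are illustrative.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: planar homography mapping src -> dst.
    src and dst are (N, 2) arrays of N >= 4 matched control points."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    H = Vt[-1].reshape(3, 3)        # null-space vector gives H up to scale
    return H / H[2, 2]

def apply_homography(H, pts):
    pts_h = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return pts_h[:, :2] / pts_h[:, 2:3]

# Four laser control points as seen by sensor A (e.g. a UAV plan view)
# and by sensor B; here B's view is a synthetic projective warp of A's.
H_true = np.array([[1.2, 0.1, 5.0], [0.0, 0.9, -3.0], [1e-3, 0.0, 1.0]])
pts_A = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 80.0], [0.0, 80.0]])
pts_B = apply_homography(H_true, pts_A)
H = fit_homography(pts_A, pts_B)
# Any pixel in sensor A's output can now be mapped into B's frame:
print(apply_homography(H, np.array([[50.0, 40.0]])))
```

Because each control point is uniquely identifiable, the correspondences are known without user intervention, which is exactly what lets the transform be recomputed automatically as the platforms move.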
The invention will now be described with reference to the accompanying diagrammatic drawings in which:
Figure 1 shows a schematic drawing of the system of one form of the invention; and
Figure 2 shows a schematic drawing of another form of the invention.
Figure 1 shows a first embodiment of the invention. In this embodiment an Unmanned Aerial Vehicle (UAV) with an integrated sensor and multiple laser designators gives a plan view of the target area and applies registration control points to the target area, which are then observed by the ground-based vehicle. The ground-based vehicle can then use the registration control points to calculate a transform with which to map the UAV image output to its own image output, thereby allowing greater information to be gained about the target area, including visibility of occluded areas (e.g. directly behind a tower or walls).
It will be appreciated that both platforms described above could be static or on the move, allowing greater flexibility. These techniques can also employ unambiguous registration control patterns to ensure that target area orientation is visible to all sensors; this is particularly useful for airborne sensor platforms.
In a second embodiment of the invention shown in Figure 2, an Unmanned Aerial Vehicle (UAV) with integrated laser designators applies a registration control pattern to the target area, which is then observed by the ground-based multi-sensor platform. The system can then use the registration control points to calculate a transform with which to map all of the sensors to one another.
It will be appreciated that the laser designators could equally be co-located on the multi-sensor ground platform provided they are sufficiently visible by all of the relevant sensors.
Furthermore, the sensors can be co-located or on disparate platforms provided they have sufficient visibility of the registration control patterns.
This technique could be applied to provide extraction of three-dimensional information about a target scene by allowing more accurate calculation of the system's inter-sensor parallax, or resolution of sub-pixel scene elements through more accurate correlation of multiple sensors.
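The three-dimensional extraction mentioned above reduces, in the simplest rectified two-sensor case, to triangulating a registered control point from its disparity between the views. The relation and values below are a textbook illustration under that assumption, not parameters from the invention.

```python
# For two rectified sensors with parallel optical axes and a shared
# registered control point, depth follows from the standard relation
#   Z = f * B / d
# where f is the focal length (pixels), B the inter-sensor baseline
# (metres) and d the disparity of the point between the views (pixels).
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

# Illustrative values: f = 800 px, B = 0.5 m, d = 4 px.
print(depth_from_disparity(800.0, 0.5, 4.0))  # → 100.0 metres
```

The accuracy of the recovered depth depends directly on the accuracy of the inter-sensor registration, which is why sharper control-point correlation improves the three-dimensional estimate.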
Furthermore, the technique could be applied to allow more accurate registration of multi-band sensors for image fusion applications (e.g. visual and thermal band cameras - the registration control patterns will be selected to be visible by all relevant sensors).
It will also be appreciated that all of these techniques can be applied whether the platforms are static or on the move. Moreover, as noted above, these techniques can employ unambiguous registration control patterns to ensure that target area orientation is visible to all sensors; this is particularly useful for airborne sensor platforms.
Previous registration methods are restricted by their reliance on information contained in the existing image space whereas the proposed method introduces additional information into that image space specifically designed to allow accurate registration of the image space from the perspective of all sensors in the system.
It will be appreciated that this invention can be applied to any computer vision system which uses image registration - be that temporal or spatial. This includes but is not limited to Medical, Topographical, Photographic and video image registration.