WO1987003719A1 - Inspection apparatus - Google Patents
Inspection apparatus
- Publication number
- WO1987003719A1 PCT/GB1986/000765
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- features
- inspection
- interest
- articles
- inspection apparatus
- Prior art date
Links
- 238000007689 inspection Methods 0.000 title claims abstract description 40
- 238000012545 processing Methods 0.000 claims abstract description 17
- 238000004458 analytical method Methods 0.000 claims abstract description 9
- 230000003287 optical effect Effects 0.000 claims description 8
- 230000001419 dependent effect Effects 0.000 claims description 4
- 230000006870 function Effects 0.000 description 13
- 238000000034 method Methods 0.000 description 12
- 235000013305 food Nutrition 0.000 description 5
- 238000012216 screening Methods 0.000 description 5
- 238000010276 construction Methods 0.000 description 4
- 238000005259 measurement Methods 0.000 description 4
- 238000003708 edge detection Methods 0.000 description 3
- 230000000694 effects Effects 0.000 description 3
- 238000013507 mapping Methods 0.000 description 3
- 235000015895 biscuits Nutrition 0.000 description 2
- 238000013461 design Methods 0.000 description 2
- 238000009826 distribution Methods 0.000 description 2
- 235000012970 cakes Nutrition 0.000 description 1
- 235000019219 chocolate Nutrition 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000008030 elimination Effects 0.000 description 1
- 238000003379 elimination reaction Methods 0.000 description 1
- 230000007717 exclusion Effects 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 239000003292 glue Substances 0.000 description 1
- 230000036244 malformation Effects 0.000 description 1
- 238000012015 optical character recognition Methods 0.000 description 1
- 235000015108 pies Nutrition 0.000 description 1
- 235000013550 pizza Nutrition 0.000 description 1
- 230000000750 progressive effect Effects 0.000 description 1
- 239000002699 waste material Substances 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30128—Food products
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
Inspection apparatus for the successive inspection of a plurality of articles having a common set of features of interest comprises an edge detector for the positional location of individual articles. The edge detector is fed from a scanner which captures an image of a region in the vicinity of a located article. The apparatus analyses the significance of features detected by the scanner and selectively controls the processing of the derived data in order preferentially to obtain data relevant to the features of interest, thereby reducing processing time.
Description
INSPECTION APPARATUS This invention relates to industrial inspection apparatus and, in particular, to apparatus adapted for the rapid inspection of a plurality of components.
Industrial inspection involves the identification, location, counting, scrutiny and measurement of products and components, often under conditions where they are moving at moderate speed along a conveyor system. Products therefore have to be examined in real time, and this imposes a major difficulty, since the processing rate required to analyse the necessary number of pixels is considerably more than can be coped with by a single conventional serial processor. In practice the processing rate is 50-100 times faster than a single central processor unit can cope with, so special hardware has to be designed for the purpose.
We have devised an Image-handling Multi-Processor (IMP) for this purpose. IMP consists of a Versatile Modular Eurocard VME bus and crate, which holds a set of special co-processor boards including a frame store for image processing. The co-processors operate rapidly and enable the system to perform industrial inspection tasks in real time. The system is designed in an integrated way, so that the data transactions on the VME bus give only a limited overhead in terms of speed. In addition, the memory sub-systems are designed to operate at the maximum speed of which the VME bus is capable.
According to the present invention there is provided inspection apparatus for the successive inspection of a plurality of articles having a common set of features of interest, comprising detector means for the positional location of individual articles, scanning means to capture an image of a region in the vicinity of a located article and analysis means to analyse the significance of features detected by said scanning means, wherein selection means coupled to said positional location means is operable selectively to control the processing of data derived from said scanning means in order preferentially to obtain data relevant to said features of interest.
There is also provided an image location device for use in inspection apparatus for the successive inspection of a plurality of articles having a common set of features of interest, comprising detector means for deriving a plurality of electrical signals corresponding to the intensity of the optical signal at a plurality of positions in the vicinity of a predetermined position on an optical image, combining means for combining pairs of symmetrically-weighted groups of said electrical signals in accordance with a predetermined algorithm to derive a pair of electrical signals dependent on the intensity of the optical signals at positions in the vicinity of said predetermined position and difference means to derive an electrical signal dependent on the difference between said pair of electrical signals.
The IMP system is built around a frame store typically containing four image-planes of 128 x 128 bytes. The memory is configured so that access is as rapid as VME protocol will allow, viz 150 nsec. A special technique is used to achieve this throughput.
The purpose of the IMP system is to permit industrial inspection tasks to be undertaken in real time and at reasonable expense. Images from a line-scan or TV (vidicon) camera are digitised and fed to the frame store under computer control: they are then processed rapidly by special processing boards. These contain co-processors which operate under control of a host computer and which can access image data autonomously via the VME bus. An arbitrator on the VME bus mediates between the host processor, the image display hardware and the various co-processors. IMP is a multi-processor system to the extent that a number of processors can perform their various functions concurrently, though only one processor may access the bus at any one time. This is not as
severe a constraint as might be thought, since (a) pipelining of processes together with provision of local memory enables use to be made of the parallel processing capability of the co-processors; and (b) many real time industrial inspection applications demand speeds that are high but not so high that careful design of the processors will not permit useful tasks to be done by this sort of system. It should be noted that the design of the co-processor system has in this case not only been careful but also ingenious, so that the capability of IMP is substantially greater than that of many commercially available systems, while being significantly less complex and expensive.
The IMP system finds particular application for food product inspection. Considerable numbers of food products have the characteristic that they are round. In our inspection work we needed to ensure that we could locate such products rapidly and then scrutinise them effectively for crucial features and defects. Food products tend to be mass produced in large numbers on continuously moving product lines. These lines are typically 1-2 metres wide, and contain 12-20 products across the width of a conveyor, which may be moving at 30-50 cm per second. Thus the product flow rate past a given point in the line is typically 20 units per second. This means that one processor of the IMP type will have to cope with one item every 50 msec or so. It will have to find each product in the images it receives, and then scrutinise it. An adequate resolution for a single product will be such that the product occupies a square of side 60 to 80 pixels. Thus location of the product is non-trivial, and scrutiny is even less trivial. It is seen that this sort of problem involves a lot of pixel accesses, but this need not be sufficient to 'tie up' the VME bus and cause data-flow problems.
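To make the processing budget concrete, the short sketch below combines the figures quoted in this description - 20 products per second, a 128 x 128 frame store and a best-case VME access time of 150 nsec - into a rough pixel-access budget per product. It is an illustrative back-of-envelope calculation only; the assumption that every pixel access goes over the VME bus is pessimistic, since co-processor local memory and sub-area scanning reduce the bus load.

```python
# Rough pixel-budget arithmetic using the figures quoted in the text:
# 20 products per second, 128 x 128 frame store, 150 nsec per VME access.
# Everything else is an illustrative assumption.

items_per_second = 20
time_per_item_s = 1.0 / items_per_second            # about 50 ms per product

frame_pixels = 128 * 128                             # one image plane
vme_access_s = 150e-9                                # best-case VME cycle

# Time to touch every pixel of one plane once over the VME bus.
full_frame_pass_s = frame_pixels * vme_access_s      # about 2.5 ms

# Number of full-frame passes that fit into the 50 ms budget for one
# product if every access went over the bus.
passes_per_item = time_per_item_s / full_frame_pass_s

print(f"time per item:       {time_per_item_s * 1e3:.1f} ms")
print(f"one full-frame pass: {full_frame_pass_s * 1e3:.2f} ms")
print(f"passes per item:     {passes_per_item:.0f}")   # roughly 20
```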
Considerable attention has been devoted to the algorithms for locating and scrutinising the product and the types of processor needed to implement them. Although algorithms originally developed
for round food product inspection were the first to be implemented in hardware, they have now been generalised so that they are suitable for a much wider variety of products. First, any products which are round or have round holes can be located rapidly and with ease. Second, all such products can be inspected and closely scrutinised. Third, a number of features even of non-circular products can be located directly in their own right, and then products scrutinised in the vicinity of these features, or at related positions. A characteristic of the system is that it relies on recognition of certain features which are capable of triggering the remainder of the system. In our experience, there are a very large number of object features that can simply be located - including holes, specks, dots, characters, corners, line intersections and so on. Thus the IMP system can be used in an extremely wide range of industrial inspection and robotics applications. In addition, there are cases when the object or some feature of it does not need to be located, since its presence and position is already known - e.g. in a robot gripper, at the bottom of a chute, or elsewhere. The IMP system is clearly capable of dealing with this simplified set of situations. Overall, it is seen that the IMP system is in practice of rather general utility. The functions presently performed by the various processors are: (1) edge detection and edge orientation; (2) construction of radial intensity histograms in the vicinity of special features; (3) counting of pixels in various ranges of intensity and with various tags already attached to them; and (4) correlation of thresholded patterns against internally generated parameters such as distance from the feature of interest. Other features are: (5) construction of angular intensity histograms; (6) construction of overall object intensity histograms, plus (7) construction of intensity histograms within a specified area; and (8) more general grey-scale sub-image correlation. These functions permit the rapid estimation of product centres, and the measurement of perimeter and area, e.g. that of chocolate cover over a biscuit.
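As a software illustration of function (2), the sketch below builds a radial intensity histogram about a given reference point. It is a minimal sketch only: the NumPy interface, the bin width and the way the image and centre are supplied are assumptions for illustration, not details taken from the description; the hardware builds the equivalent result on the fly from its (r, θ) look-up table.

```python
import numpy as np

def radial_intensity_histogram(image, cx, cy, r_max, n_bins):
    """Mean intensity as a function of distance from a reference point.

    A software sketch of co-processor function (2) above; parameter
    names and the binning scheme are illustrative assumptions.
    """
    h, w = image.shape
    sums = np.zeros(n_bins)
    counts = np.zeros(n_bins)
    for y in range(max(0, cy - r_max), min(h, cy + r_max + 1)):
        for x in range(max(0, cx - r_max), min(w, cx + r_max + 1)):
            r = np.hypot(x - cx, y - cy)
            if r < r_max:
                b = int(r * n_bins / r_max)   # radial bin for this pixel
                sums[b] += image[y, x]
                counts[b] += 1
    return sums / np.maximum(counts, 1)

# e.g. an intensity profile of the chocolate cover on a round biscuit
# whose centre has been located at (64, 64):
# profile = radial_intensity_histogram(frame, 64, 64, r_max=40, n_bins=20)
```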
A novel feature that permits much of the processing to be carried out rapidly and efficiently is that of autoscan by a particular processor of an area of an image relative to a given starting point, coupled with the use of an internal bus which holds (x,y) and (r,θ) co-ordinates of the currently accessed pixel relative to the starting pixel: this is aided by use of a look-up table on the processor board giving information related to the pixel (x,y) or (r,θ) co-ordinates.
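One way to picture this autoscan and RL-bus arrangement in software is to precompute, for every offset inside the scanned sub-area, the (r, θ) values that the downloadable look-up table would place on the internal bus, and then hand each visited pixel its relative co-ordinates. The window size, table layout and callback interface below are assumptions for illustration only.

```python
import numpy as np

def build_rl_table(half_size):
    """Precompute (r, theta) for every (dx, dy) offset in a square window,
    playing the role of the downloadable RAM look-up table that feeds
    the RL-bus."""
    size = 2 * half_size + 1
    r = np.empty((size, size))
    theta = np.empty((size, size))
    for dy in range(-half_size, half_size + 1):
        for dx in range(-half_size, half_size + 1):
            r[dy + half_size, dx + half_size] = np.hypot(dx, dy)
            theta[dy + half_size, dx + half_size] = np.arctan2(dy, dx)
    return r, theta

def autoscan(image, ref_x, ref_y, half_size, visit):
    """Scan the sub-area around (ref_x, ref_y), handing each pixel its
    relative (x, y) and (r, theta) to a processing callback `visit`."""
    r_tab, th_tab = build_rl_table(half_size)
    for dy in range(-half_size, half_size + 1):
        for dx in range(-half_size, half_size + 1):
            y, x = ref_y + dy, ref_x + dx
            if 0 <= y < image.shape[0] and 0 <= x < image.shape[1]:
                visit(image[y, x], dx, dy,
                      r_tab[dy + half_size, dx + half_size],
                      th_tab[dy + half_size, dx + half_size])

# `visit` might, for example, accumulate the radial or angular intensity
# histograms described above.
```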
Because the system is designed to use simple functions that can be relocated in the image at will, it is highly flexible and efficient.
In addition to these advantages, the whole system is simply controlled in a high level language from a PDP-11 type of host processor, for which suitable interfaces have been devised. Alternatively, a 68000 or other host processors can be used. The important point here is that complex assembly level programming is not required at any stage. The only requirement imposed by the co-processors is that they should be initialised by RESET signals, that their operation should be started by suitable START pulses, and that certain information should be provided for them in specific registers and memory locations. Data may be read out of them by the host or other processors, or they may write their data to other locations. Control is via a minimal number of registers which may each be given a high level name for programming purposes. The co-processor functions listed above are special functions which have been found to take the bulk of the effort in practical industrial inspection systems. Since these functions are not actually general, they cannot on their own carry out the whole of an inspection task. Thus 'glue' functionality is required between the given functions. This may be provided by the host processor. If this gets slow or is overloaded, then several software co-processors may be added to the IMP system; at present this is envisaged to be possible via DEC T-11 processors, possibly working
in conjunction with a bit-slice. It would not be possible for such processors to perform the whole function of IMP because they would operate too slowly for inspection purposes, unless at least 50 concurrently operating processors were added to the system. The IMP system is intended to overcome the need for this sort of solution by doing the bulk of the processing much more economically: thus a brute-force solution is replaced by a clever solution. However, for generality, one or two additional software co-processors form a useful adjunct to the remainder of the hardware. An embodiment of this invention will now be described by way of example with reference to the accompanying drawings in which:-
Figure 1 is a schematic view of an image-handling multiprocessor system;
Figure 2 is a schematic drawing of Processor II from the IMP system of Figure 1;
Figure 3 is a schematic drawing of Processor Module A for Processor II;
Figure 4 is a schematic drawing of Processor Module B for Processor II; Figure 5 is a schematic drawing of Processor Module C for Processor II;
Figure 6 is a schematic drawing of Processor Module D for Processor III;
Figure 7 is a schematic drawing of Processor Module E for Processor IV;
Figure 8 is a schematic drawing of Processor Module F for Processor IV;
In order to carry out the inspection task, products first have to be located. Advantageously inspection can be carried out in two stages: (1) product location and (2) product scrutiny. Product location may conveniently be carried out using the generalised Hough transform. This may be performed directly on grey-scale images, for objects whose shapes are determined only
from a look-up table. In order to perform the Hough transform, edge location must be carried out. Processor I has been designed for this purpose, and is therefore able to deliver data from which special reference points in an image containing product may be found. It is crucial to IMP that starting reference points should be located because of the general nature of the Hough transform. It has been found that special reference points can normally be located in an image containing product, and hence this constitutes a sufficiently general procedure to form the basic starting point for the process of inspection.
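For a round product of known radius the generalised Hough transform reduces to a particularly simple form: each strong edge point casts a vote for a candidate centre a fixed distance away along its gradient direction, and the accumulator peak gives the product location. The sketch below assumes edge magnitude and orientation arrays of the kind Processor I delivers; the array interface and threshold are illustrative assumptions, and the sign of the vote direction depends on whether the product is brighter or darker than its background.

```python
import numpy as np

def hough_circle_centres(edge_mag, edge_theta, radius, mag_threshold):
    """Accumulate candidate centres for circles of a known radius.

    Only edge points whose magnitude clears a high threshold vote, each
    casting a single vote `radius` pixels along its gradient direction
    (flip the sign, or vote both ways, for the opposite contrast).
    """
    acc = np.zeros(edge_mag.shape, dtype=np.int32)
    ys, xs = np.nonzero(edge_mag > mag_threshold)
    for y, x in zip(ys, xs):
        cx = int(round(x - radius * np.cos(edge_theta[y, x])))
        cy = int(round(y - radius * np.sin(edge_theta[y, x])))
        if 0 <= cy < acc.shape[0] and 0 <= cx < acc.shape[1]:
            acc[cy, cx] += 1
    peak = np.unravel_index(np.argmax(acc), acc.shape)   # most-voted centre
    return peak, acc
```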
Given that certain reference points have been located in an image, the next stage of inspection is to analyse the image in their vicinity, thereby enabling objects to be scrutinised. Processor II carries out this function, using an autoscanning module which scans systematically over the region of interest. Scanning over a sub-area of the image is profitable for two reasons: (1) it speeds up processing by eliminating irrelevant areas; (2) it enables a variety of matching processes to be carried out, since the reference point will be of one or other standard type: this means that comparison can be made with previously compiled data sets. The remainder of Processor II contains modules which aid methods by which matching may be achieved. Processor II contains an internal bus which carries information about the position (x,y) of the current pixel relative to that of the reference point. In particular this internal bus carries information on the (r,θ) co-ordinates of the current pixel relative to the reference point, which it has obtained from a downloadable look-up table held in RAM. Only one reference point may be used at any one time, but the look-up table may contain several sets of additional information, each set being relevant to a particular type of object or feature. For example, the additional information may include data on the ideal size of a particular type of object, or other details. Thus Processor II may scan the image successively looking at several objects of various types,
and placing output information on each in its output memory. (The latter arrangement will often be used to save time on the VME bus.)
Use of the internal (x,y)/(r,θ)/information bus (which has been designated as the 'Relative Location' or RL-bus) feeds data to the various Modules within Processor II. In particular, these can build up valuable radial and angular intensity histograms which provide rapid means of comparing the region near a special reference point with that expected for an ideal object. Normal intensity histograms of various ranges can also be generated, to aid this analysis, and correlations can be performed for the distribution of particular intensity values, relative to standard distributions. The emphasis here is on rapidly scrutinising specific regions of the image in various standard ways, in order to save the host processor from the bulk of the processing. The host processor is still permitted to interrogate part of the image when Processor II leaves any detail unclear.
Processor I carries out a vital edge detection function: in fact it is optimised for the computation of Hough transforms. For this purpose it is insufficient to compute and threshold edge magnitude - edge orientation also has to be determined locally. Processor I is designed to determine both edge magnitude and edge orientation. In addition Processor I is used in a novel and unique manner, namely thresholding edge magnitude at a high level. The reason for this is (a) to find a significantly reduced number of edge points, thereby speeding up the whole IMP system, and (b) to ensure that the edge points that are located are of increased accuracy relative to an average edge point. This strategy is enhanced by incorporating within Processor I a double threshold on the pixel intensity value, so that points which are not half way up the intensity scale are eliminated, thereby speeding up processing further.
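A minimal sketch of this screening policy is given below: edge points survive only if their magnitude clears a high threshold and the underlying pixel intensity lies between the two intensity thresholds. The NumPy interface and the particular threshold values are assumptions for illustration.

```python
import numpy as np

def screen_edge_points(image, edge_mag, mag_threshold, low, high):
    """Keep only edge points that are both strong and roughly half way up
    the intensity scale, as described for Processor I above.

    Returns a boolean mask of surviving edge points.
    """
    strong = edge_mag > mag_threshold             # high edge-magnitude threshold
    mid_band = (image > low) & (image < high)     # double threshold on intensity
    return strong & mid_band

# e.g. for 8-bit images, keep strong edges whose pixel lies in the middle
# half of the intensity range (threshold values are assumptions):
# mask = screen_edge_points(frame, mag, mag_threshold=100, low=64, high=192)
```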
Processor I has a more complex autoscan unit than Processor II, since it employs a 3 x 3 pixel window instead of a 1 x 1 window.
It achieves additional speed-up by (a) saving input pixel data from the previous two pixels (i.e. it only takes in a new 1 x 3 sub-window for every new pixel), (b) pipelining its computation, and (c) saving its output data in its own local high-speed memory, where it is still accessible by the host processor.
The priority levels on the VME bus, starting with the highest, are:
level 3: executive (host) processor, which acts as system controller
level 2: hardwired and micro-coded co-processors, including Processor I and Processor II
level 1: software co-processors, including DEC T-11 processors
level 0: video display circuitry
Software co-processors are more likely to be intelligent than hardware co-processors, since it is easier to build more complex functionality into software than into hardware. Thus hardware co-processors may not be interruptable and should if necessary be permitted to complete their assigned operations. Therefore they are assigned priority level 2 rather than level 1. Video display circuitry has the lowest priority, and is thus able to display images from VME bus memory only when no other activity is occurring on the bus.
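A minimal model of this arbitration scheme is sketched below: a pending request is granted strictly by level, with ties within a level broken by position in the daisy-chain described next (closest to the arbitrator first). The tuple representation is an assumption; the real arbitrator is a hardware module on the VME bus.

```python
def grant_bus(requests):
    """Choose the next bus master from pending requests.

    `requests` is a list of (level, chain_position) tuples, where level 3
    is the host, 2 the hardwired co-processors, 1 the software
    co-processors and 0 the video display circuitry, and a smaller
    chain_position means closer to the arbitrator. The highest level wins;
    ties go to the requester nearest the arbitrator.
    """
    if not requests:
        return None
    return max(requests, key=lambda rq: (rq[0], -rq[1]))

# e.g. the host (3, 0) beats Processor I (2, 0), and two level-2
# requesters are resolved by daisy-chain position: (2, 0) beats (2, 1).
```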
Bus grants to processors at the same level of priority are daisy-chained, and those closest to the arbitrator module have highest resulting priority. During algorithm development, or in an industrial system when speed of processing is not at a premium, high level language notation of pixels is useful. The PPL2 notation for pixels within a 5 x 5 window is:
P15 P14 P13 P12 P11
P16 P4 P3 P2 P10
P17 P5 P0 P1 P9
P18 P6 P7 P8 P24
P19 P20 P21 P22 P23
In order to employ this notation for pixels around the location (X,Y) in an image, it is necessary to perform a re-mapping operation. If this is carried out purely in software it will slow access to a miserable level. In the IMP frame store re-mapping is carried out automatically in a look-up table: IMP actually copes with windows of size up to 7 x 7 by this method, rather than 5 x 5 as in the above example. Clearly, the look-up operation will reduce speed slightly. However, for the two instances cited above, the minimal reduction in speed resulting from direct RAM look-up will be immaterial, and the gains in ease of programming will be very worthwhile. In fact the automatic re-mapping procedure adopted here has the advantage of using absolute rather than indexed addressing, which itself leads to a speed-up in pixel access: this is not seen in currently available commercial systems where (say) a 68000 has direct access to all the image data in a huge block of contiguous memory.
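The automatic re-mapping can be imitated in software by transcribing the PPL2 grid above into a table of (dx, dy) offsets and resolving each Pn against the window centre. This is only a sketch of the idea: the real frame store packs the translation into look-up tables addressed by co-ordinate and window-placement bits, which is not modelled here.

```python
# The PPL2 grid above fixes which (dx, dy) offset each pixel index Pn
# refers to; here it is transcribed directly.
PPL2_GRID = [
    [15, 14, 13, 12, 11],
    [16,  4,  3,  2, 10],
    [17,  5,  0,  1,  9],
    [18,  6,  7,  8, 24],
    [19, 20, 21, 22, 23],
]

# index -> (dx, dy), with P0 at the window centre
OFFSETS = {
    n: (col - 2, row - 2)
    for row, line in enumerate(PPL2_GRID)
    for col, n in enumerate(line)
}

def pixel(image, x, y, n):
    """Return pixel Pn of the 5 x 5 window centred on (x, y)."""
    dx, dy = OFFSETS[n]
    return image[y + dy][x + dx]

# e.g. pixel(image, 64, 64, 9) reads the pixel two columns to the right
# of the window centre (64, 64).
```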
The size of look-up table required for re-mapping is that required to combine X with 6 bits of window placement information, and similarly for Y. For a 128 x 128 frame store this means two look-up tables each having 7 + 6 = 13 address bits and 7 co-ordinate data bits plus 1 over-range data bit; and for a 256 x 256 frame store it means two tables each having 8 + 6 = 14 address bits and 8 + 1 data bits. For a 128 x 128 frame store two 8K x 8 EPROMs are sufficient for the purpose. Advantageously, apparatus in accordance with the invention may be used for processing signals from an edge detector. This enhances speed of processing by rapid selection of pixels which provide accurate orientation and location information while ignoring other pixels. The principle that is used for selecting pixels giving high location and orientation accuracy is to look for those pixels where the intensity gradient is very uniform. This may be achieved by thresholding an intensity gradient uniformity parameter at a high level or a non-uniformity parameter at a low level. This
may be achieved by taking two symmetrically-weighted sums of pixels near a pixel location under consideration. Advantageously, these may be re-weighted so that they will be exactly equal if the locality has an intensity gradient which is exactly uniform. The difference of the sums provides a convenient non-uniformity parameter which may be detected by a threshold detector set to a convenient value.
The method will be illustrated for a 3 x 3 neighbourhood, where the Sobel operator is being used to detect edges. Using the following notation to describe the pixel intensities in the neighbourhood
A B C
D E F
G H I
we estimate the (Sobel) x and y components of intensity gradient as
gx = (C + 2F + I) - (A + 2D + G)
gy = (A + 2B + C) - (G + 2H + I)
and intensity gradient can then be estimated as
g = [gx² + gy²]^½
or by a suitable approximation. Edge orientation may be deduced from the relative values of gx and gy using the arctangent function.
Symmetric sums of pixel values that may be used for computing gradient uniformity are
s1 = A + C + G + I
s2 = B + D + F + H
s3 = 4E
Thus possible non-uniformity parameters would be
u1 = s2 - s3
u2 = s3 - s1
u3 = s1 - s2
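Putting these formulas together, a direct (unoptimised) sketch of the edge and non-uniformity computation for a single 3 x 3 neighbourhood might look as follows; the input layout and the thresholding policy in the usage comment are assumptions for illustration.

```python
import math

def sobel_with_uniformity(nbhd):
    """Edge magnitude, orientation and non-uniformity for one 3 x 3
    neighbourhood supplied as rows [[A, B, C], [D, E, F], [G, H, I]],
    following the gx, gy, s1..s3 and u1..u3 definitions above."""
    (A, B, C), (D, E, F), (G, H, I) = nbhd

    gx = (C + 2 * F + I) - (A + 2 * D + G)
    gy = (A + 2 * B + C) - (G + 2 * H + I)
    g = math.hypot(gx, gy)          # edge magnitude
    theta = math.atan2(gy, gx)      # edge orientation

    s1 = A + C + G + I
    s2 = B + D + F + H
    s3 = 4 * E
    u1, u2, u3 = s2 - s3, s3 - s1, s1 - s2

    return g, theta, (u1, u2, u3)

# A point would be passed to the object locator only if its edge magnitude
# is high and its non-uniformity is low (both threshold values are
# assumptions):
# g, theta, (u1, u2, u3) = sobel_with_uniformity(window)
# keep = g > 100 and abs(u1) + abs(u2) + abs(u3) < 30
```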
It will be clear to one skilled in the art that the effect of the uniformity detector is to remove from consideration by a later object locator a good proportion of those edge points where noise is significant or the edge orientation measurement would otherwise be inaccurate, and at the same time to speed up the algorithm. It will, furthermore, operate in any size of neighbourhood, not just the 3 x 3 one illustrated by the Sobel edge detector. It is not restricted to use with a Sobel edge detector - others such as a Prewitt 3 x 3 edge detector may be employed. Other sets of symmetric combinations of sums of weights may be employed, and any linear or non-linear combination of these could be used to detect non-uniformity. (E.g. in the 3 x 3 case one could well use |u1| + |u2| + |u3|.) In larger neighbourhoods there are many possible uniformity operators.
One function of the uniformity detector is the elimination of noisy locations. Another is the exclusion from consideration of points which are not in their expected position due, for example, to malformation of an object which is being inspected. The uniformity detector would also be able to improve edge orientation accuracy by eliminating cases when a 'step' edge didn't pass
closely enough through the centre of a neighbourhood. If there is any offset, accuracy deteriorates, but the uniformity operator improves the probability of detecting this.
The uniformity detector may be used to eliminate edge (or apparent edge) locations which are subject to noise. This noise can arise within any of the pixels in the neighbourhood: e.g. with a Sobel detector, noise in any one of the pixels in the neighbourhood (except the central one!) would reduce the edge orientation accuracy, and the uniformity operator could attempt to detect this. (Of course, it might sometimes fail - e.g. if two pixels were subject to noise and the effect cancelled itself out in the uniformity operator but not in the edge orientation operator itself.)
It will furthermore be apparent that the screener can act by pre-screening or post-screening or simply in parallel. Parallel screening would be appropriate if special VLSI chips were to be designed, since this would be the fastest operating option. Otherwise pre- or post-screening would be useful for progressively cutting down the data to be handled by a later object detector. Note also that the whole thing could be done in one VLSI chip, rather than having one such chip for edge detection and one for uniformity screening. The important point is that the uniformity detector can be used to speed up the object detection algorithm by reducing the number of points considered.
Other features are that the invention permits use of radial histograms and correlation on the fly. By defining crucial areas of the picture, the hardware pre-selects the relevant data and removes redundant information from the computer's workload; this creates a progressive hierarchy of data extraction and thus speeds up the inspection process. It does not waste time, because it knows where to check any relevant features of the image, which are not necessarily only edge points. The computer recalculates edge points on the basis of stored data and of measurements from the scanner. Because it is aware of the background points it does not take them into consideration.
The invention finds particular application in industrial inspection of mass produced products such as food items (biscuits, pizzas, cakes, pies and other products of uniform size and shape). It may also be used for forensic tasks such as number plate recognition and for optical mark and optical character recognition. It is not restricted to optical techniques for image capture and may employ other methods, such as sonar, infra-red or tactile sensing.
Claims
1. Inspection apparatus for the successive inspection of a plurality of articles having a common set of features of interest, comprising detector means for the positional location of individual articles, characterised in that it includes scanning means to capture an image of a region in the vicinity of a located article and analysis means to analyse the significance of features detected by said scanning means, wherein selection means coupled to said positional location means is operable selectively to control the processing of data derived from said scanning means in order preferentially to obtain data relevant to said features of interest.
2. Inspection apparatus for the successive inspection of a plurality of articles having a common set of features of interest as claimed in Claim 1 characterised in that said selection means includes an intensity gradient uniformity detector which serves as an intensity threshold detector.
3. Inspection apparatus for the successive inspection of a plurality of articles having a common set of features of interest as claimed in Claim 2 characterised in that it includes rejection means to reject from further analysis the signals derived from certain of said features.
4. Inspection apparatus for the successive inspection of a plurality of articles having a common set of features of interest as claimed in Claim 3 characterised in that the rejection means is adapted to reject signals which are subject to noise greater than a predetermined level.
5. Inspection apparatus for the successive inspection of a plurality of articles having a common set of features of interest as claimed in Claim 3 characterised in that the rejection means is adapted to reject signals which do not conform to a predetermined profile.
6. Inspection apparatus for the successive inspection of a plurality of articles having a common set of features of interest as claimed in Claim 5 characterised in that the profile is determined by the location of a step edge relative to the centre of a population of measured intensities.
7. An image location device for use in inspection apparatus in accordance with any one of the preceding claims characterised in that it comprises detector means for deriving a plurality of electrical signals corresponding to the intensity of the optical signal at a plurality of positions in the vicinity of a predetermined position on an optical image, combining means for combining pairs of symmetrically-weighted groups of said electrical signals in accordance with a predetermined algorithm to derive a pair of electrical signals dependent on the intensity of the optical signals at positions in the vicinity of said predetermined position and difference means to derive an electrical signal dependent on the difference between said pair of electrical signals.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP87500961A JPS63503332A (en) | 1985-12-16 | 1986-12-16 | Inspection equipment |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB8530928 | 1985-12-16 | ||
GB8530929 | 1985-12-16 | ||
GB858530929A GB8530929D0 (en) | 1985-12-16 | 1985-12-16 | Inspection apparatus |
GB858530928A GB8530928D0 (en) | 1985-12-16 | 1985-12-16 | Image enhancer |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1987003719A1 true WO1987003719A1 (en) | 1987-06-18 |
Family
ID=26290124
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB1986/000765 WO1987003719A1 (en) | 1985-12-16 | 1986-12-16 | Inspection apparatus |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP0289500A1 (en) |
JP (1) | JPS63503332A (en) |
GB (1) | GB2184233A (en) |
WO (1) | WO1987003719A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0355377A1 (en) * | 1988-08-05 | 1990-02-28 | Siemens Aktiengesellschaft | Method for testing optically flat electronic component assemblies |
EP0363525A1 (en) * | 1988-10-17 | 1990-04-18 | Siemens Aktiengesellschaft | Method of recognising the two-dimensional position and orientation of already known objects |
EP0364614A1 (en) * | 1988-10-17 | 1990-04-25 | Siemens Aktiengesellschaft | Method of recognising the spatial position and orientation of already known objects |
Families Citing this family (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB8908507D0 (en) * | 1989-04-14 | 1989-06-01 | Fokker Aircraft Bv | Method of and apparatus for non-destructive composite laminatecharacterisation |
GB0318733D0 (en) * | 2003-08-11 | 2003-09-10 | Icerobotics Ltd | Improvements in or relating to milking machines |
US9161511B2 (en) | 2010-07-06 | 2015-10-20 | Technologies Holdings Corp. | Automated rotary milking system |
US8720382B2 (en) | 2010-08-31 | 2014-05-13 | Technologies Holdings Corp. | Vision system for facilitating the automated application of disinfectant to the teats of dairy livestock |
US10111401B2 (en) | 2010-08-31 | 2018-10-30 | Technologies Holdings Corp. | System and method for determining whether to operate a robot in conjunction with a rotary parlor |
US8800487B2 (en) | 2010-08-31 | 2014-08-12 | Technologies Holdings Corp. | System and method for controlling the position of a robot carriage based on the position of a milking stall of an adjacent rotary milking platform |
US9149018B2 (en) | 2010-08-31 | 2015-10-06 | Technologies Holdings Corp. | System and method for determining whether to operate a robot in conjunction with a rotary milking platform based on detection of a milking claw |
US8671885B2 (en) | 2011-04-28 | 2014-03-18 | Technologies Holdings Corp. | Vision system for robotic attacher |
US9681634B2 (en) | 2011-04-28 | 2017-06-20 | Technologies Holdings Corp. | System and method to determine a teat position using edge detection in rear images of a livestock from two cameras |
US8746176B2 (en) | 2011-04-28 | 2014-06-10 | Technologies Holdings Corp. | System and method of attaching a cup to a dairy animal according to a sequence |
US9058657B2 (en) | 2011-04-28 | 2015-06-16 | Technologies Holdings Corp. | System and method for filtering data captured by a 3D camera |
US8903129B2 (en) | 2011-04-28 | 2014-12-02 | Technologies Holdings Corp. | System and method for filtering data captured by a 2D camera |
US10127446B2 (en) | 2011-04-28 | 2018-11-13 | Technologies Holdings Corp. | System and method for filtering data captured by a 2D camera |
US9258975B2 (en) | 2011-04-28 | 2016-02-16 | Technologies Holdings Corp. | Milking box with robotic attacher and vision system |
US9215861B2 (en) | 2011-04-28 | 2015-12-22 | Technologies Holdings Corp. | Milking box with robotic attacher and backplane for tracking movements of a dairy animal |
US10357015B2 (en) | 2011-04-28 | 2019-07-23 | Technologies Holdings Corp. | Robotic arm with double grabber and method of operation |
US9161512B2 (en) | 2011-04-28 | 2015-10-20 | Technologies Holdings Corp. | Milking box with robotic attacher comprising an arm that pivots, rotates, and grips |
US9265227B2 (en) | 2011-04-28 | 2016-02-23 | Technologies Holdings Corp. | System and method for improved attachment of a cup to a dairy animal |
US9049843B2 (en) | 2011-04-28 | 2015-06-09 | Technologies Holdings Corp. | Milking box with a robotic attacher having a three-dimensional range of motion |
US8885891B2 (en) | 2011-04-28 | 2014-11-11 | Technologies Holdings Corp. | System and method for analyzing data captured by a three-dimensional camera |
US9107379B2 (en) | 2011-04-28 | 2015-08-18 | Technologies Holdings Corp. | Arrangement of milking box stalls |
US9107378B2 (en) | 2011-04-28 | 2015-08-18 | Technologies Holdings Corp. | Milking box with robotic attacher |
US9043988B2 (en) | 2011-04-28 | 2015-06-02 | Technologies Holdings Corp. | Milking box with storage area for teat cups |
US8683946B2 (en) | 2011-04-28 | 2014-04-01 | Technologies Holdings Corp. | System and method of attaching cups to a dairy animal |
US9357744B2 (en) | 2011-04-28 | 2016-06-07 | Technologies Holdings Corp. | Cleaning system for a milking box stall |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2050849A5 (en) * | 1969-06-26 | 1971-04-02 | Automatisme Cie Gle | |
BE789062A (en) * | 1971-09-23 | 1973-01-15 | Nederlanden Staat | AUTOMATIC ADDRESS DETECTION |
CA1098209A (en) * | 1975-10-20 | 1981-03-24 | Billy J. Tucker | Apparatus and method for parts inspection |
EP0041870B1 (en) * | 1980-06-10 | 1986-12-30 | Fujitsu Limited | Pattern position recognition apparatus |
EP0081322A3 (en) * | 1981-12-04 | 1986-07-30 | British Robotic Systems Limited | Component identification systems |
US4685141A (en) * | 1983-12-19 | 1987-08-04 | Ncr Canada Ltd - Ncr Canada Ltee | Method and system for finding image data associated with the monetary amount on financial documents |
-
1986
- 1986-12-16 JP JP87500961A patent/JPS63503332A/en active Pending
- 1986-12-16 WO PCT/GB1986/000765 patent/WO1987003719A1/en not_active Application Discontinuation
- 1986-12-16 GB GB08630026A patent/GB2184233A/en not_active Withdrawn
- 1986-12-16 EP EP87900206A patent/EP0289500A1/en not_active Withdrawn
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0149457A2 (en) * | 1984-01-13 | 1985-07-24 | Kabushiki Kaisha Komatsu Seisakusho | Method of identifying contour lines |
Non-Patent Citations (3)
Title |
---|
Computer Graphics and Image Processing, Volume 17, No. 2, October 1981, (New York, US), W.A. PERKINS: "Using Circular Symmetry and Intensity for Computer Vision Inspection", pages 161-172, see pages 162-168, chapter 2 * |
Computer Graphics and Image Processing, Volume 8, No. 10, October 1978, (New York, US), M.J. BROOKS: "Rationalizing Edge Detectors", pages 277-285, see page 281, lines 5-9 * |
IEE Proceedings, Volume 132, No. 4, part D, July 1985, (Stevenage, Herts, GB), E.R. DAVIES et al.: "Radial Histograms as an Aid in the Inspection of Circular Objects", pages 158-163, see pages 158-159, chapter 2 * |
Also Published As
Publication number | Publication date |
---|---|
GB8630026D0 (en) | 1987-01-28 |
JPS63503332A (en) | 1988-12-02 |
GB2184233A (en) | 1987-06-17 |
EP0289500A1 (en) | 1988-11-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO1987003719A1 (en) | Inspection apparatus | |
US5318173A (en) | Hole sorting system and method | |
EP0122543B1 (en) | Method of image processing | |
US5305894A (en) | Center shot sorting system and method | |
US4167728A (en) | Automatic image processor | |
US5933519A (en) | Cytological slide scoring apparatus | |
KR20060100376A (en) | Method and Image Processing Apparatus for Analyzing Object Contour Image, Method and Image Processing Apparatus for Detecting Object, Industrial Vision Apparatus, Smart Camera, Image Display, Security System, and Computer Program Products | |
JP2000011089A (en) | Binarizing method for optical character recognition system | |
US5305398A (en) | Method and apparatus for scaling image data | |
US5140444A (en) | Image data processor | |
CN113751332A (en) | Visual inspection system and method of inspecting parts | |
US4246570A (en) | Optical wand for mechanical character recognition | |
EP1218851A2 (en) | System and method for locating color and pattern match regions in a target image | |
US5689580A (en) | Printed letter inspecting apparatus for solid objects | |
EP0525318A2 (en) | Moving object tracking method | |
US7881538B2 (en) | Efficient detection of constant regions of an image | |
CN115249308A (en) | A method, apparatus, electronic device and readable storage medium for object classification | |
US5020124A (en) | Method and apparatus for detecting document size in an imaging system | |
CN117470872B (en) | Board splitting quality detection method and device, board splitting machine and circuit board production line | |
Barth et al. | Attentive sensing strategy for a multiwindow vision architecture | |
JP3675366B2 (en) | Image extraction processing device | |
JPS63276300A (en) | Bend detection system for pin | |
JPH0490078A (en) | Centroid detector | |
WO1991006065A2 (en) | Image data processor system | |
JPH0421078A (en) | Pattern matching processor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): JP US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): CH DE FR NL |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1987900206 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 1987900206 Country of ref document: EP |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 1987900206 Country of ref document: EP |