Open eVision Image Pre-Processing Guide
Open eVision
Image Pre-Processing Libraries
© EURESYS s.a. 2019 - Document D121ET-Using Image Pre-Processing Libraries C++-Open eVision-2.7.1.1114 built on 2019-03-07
Open eVision User Guide
Terms of Use
EURESYS s.a. shall retain all property rights, title and interest of the documentation of the hardware and the
software, and of the trademarks of EURESYS s.a.
All the names of companies and products mentioned in the documentation may be the trademarks of their
respective owners.
The licensing, use, leasing, loaning, translation, reproduction, copying or modification of the hardware or the
software, brands or documentation of EURESYS s.a. contained in this book, is not allowed without prior notice.
EURESYS s.a. may modify the product specification or change the information given in this documentation at any
time, at its discretion, and without prior notice.
EURESYS s.a. shall not be liable for any loss of or damage to revenues, profits, goodwill, data, information systems or
other special, incidental, indirect, consequential or punitive damages of any kind arising in connection with the use
of the hardware or the software of EURESYS s.a. or resulting of omissions or errors in this documentation.
This documentation is provided with Open eVision 2.7.1 (doc build 1114).
© 2019 EURESYS s.a.
Open eVision image objects contain image data that represents rectangular images.
Each image object has a data buffer, accessible via a pointer, where pixel values are stored
contiguously, row by row.
An Open eVision image object has a rectangular array of pixels characterized by EBaseROI
parameters.
n Width is the number of columns (pixels) per row of the image.
n Height is the number of rows of the image. (Maximum width / height is 32,767 (2^15 - 1) in
Open eVision 32-bit, and 2,147,483,647 (2^31 - 1) in Open eVision 64-bit.)
n Size is the width and height.
The Plane parameter contains the number of color components: 1 for gray-level images and 3
for color images.
Classes
Image and ROI classes derive from abstract class EBaseROI and inherit all its properties.
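The row-major buffer layout described above can be sketched in plain C++. This is an illustrative structure only, not the Open eVision API: `SimpleImage`, `Offset` and `At` are hypothetical names used to show how a (width x height x planes) pixel buffer is addressed.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative only: a minimal row-major pixel buffer mirroring the layout
// described above (width columns, height rows, 'planes' color components).
struct SimpleImage {
    int width, height, planes;
    std::vector<unsigned char> buffer;   // pixels stored contiguously, row by row

    SimpleImage(int w, int h, int p)
        : width(w), height(h), planes(p),
          buffer(static_cast<std::size_t>(w) * h * p, 0) {}

    // Offset of pixel (x, y), plane c, in the contiguous buffer.
    std::size_t Offset(int x, int y, int c = 0) const {
        return (static_cast<std::size_t>(y) * width + x) * planes + c;
    }

    unsigned char& At(int x, int y, int c = 0) { return buffer[Offset(x, y, c)]; }
};
```

With this layout, moving one pixel to the right advances the buffer pointer by `planes` bytes, and moving one row down advances it by `width * planes` bytes.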
Depth maps
A depth map is a way to represent a 3D object using a 2D grayscale image, with each pixel in
the image representing a 3D point.
The pixel coordinates are the representation of the X and Y coordinates of the point while the
grayscale value of the pixel is a representation of the Z coordinate of the point.
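The mapping described above can be sketched as follows. This is an illustrative function only; the scale factors are hypothetical calibration parameters, not Open eVision API values.

```cpp
#include <cassert>

// Illustrative only: recovering a 3D point from a depth-map pixel.
// xScale, yScale and zScale stand in for hypothetical calibration data.
struct Point3D { double x, y, z; };

Point3D DepthPixelToPoint(int col, int row, double depthValue,
                          double xScale, double yScale, double zScale) {
    // Pixel coordinates map to X and Y; the gray value maps to Z.
    return Point3D{ col * xScale, row * yScale, depthValue * zScale };
}
```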
Point clouds
3D point clouds are produced by various 3D scanning techniques, such as Laser Triangulation,
Time of Flight or Structured Lighting.
Several image types are supported according to their pixel types: black and white, gray levels,
color, etc.
Easy.GetBestMatchingImageType returns the best matching image type for a given file on disk.
Depth Maps
8 and 16-bit depth map values are stored in buffers compatible with the 2D Open eVision
images.
Point Clouds
Type        Description

JPEG-2000   Data compression standard issued by the Joint Photographic Expert Group,
            registered as ISO/IEC 15444-1 and ISO/IEC 15444-2. Open eVision supports only
            the lossy compression format, in its file format and code stream variants.
            - The code stream describes the image samples.
            - The file format includes meta-information such as image resolution and
              color space.

PNG         Lossless data compression method (Portable Network Graphics).

Serialized  Euresys proprietary image file format obtained from the serialization of
            Open eVision image objects.

TIFF        Tag Image File Format is currently controlled by Adobe Systems and uses the
            LibTIFF third-party library to process images written for the 5.0 or 6.0
            TIFF specification.
            File save operations are lossless and use CCITT 1D compression for 1-bit
            binary pixel types and LZW compression for all others.
            File load operations support all TIFF variants listed in the LibTIFF
            specification.
For 8- and 16-bit depth maps, the AsImage() method returns a compatible image object
(respectively EImageBW8 and EImageBW16) that can be used with Open eVision's 2D processing
features.
Pixel access
The recommended method to access pixels is to use SetImagePtr and GetImagePtr to embed
the image buffer access in your own code. See also Image Construction and Memory Allocation
and Retrieving Pixel Values.
Use of the following methods should be limited because of the overhead incurred by each
function call:
Direct access
EROIBW8::GetPixel and SetPixel methods are implemented in all image and ROI classes to
read and write a pixel value at given coordinates. To scan all pixels of an image, you could run
a double loop on the X and Y coordinates and call GetPixel or SetPixel at each iteration, but
this is not recommended.
For performance reasons, these accessors should not be used when a significant number of
pixels needs to be processed. In that case, retrieving the internal buffer pointer using
GetBufferPtr() and iterating on the pointer is recommended.
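The recommended pointer-based scan can be sketched in plain C++ (this is an illustration of the technique, not the Open eVision API; it assumes a simple contiguous row-major BW8 buffer):

```cpp
#include <cassert>
#include <vector>

// Illustrative only: iterating over a contiguous BW8 buffer with a raw
// pointer, as recommended above, instead of calling a per-pixel accessor.
int SumPixels(const unsigned char* buffer, int width, int height) {
    int sum = 0;
    const unsigned char* p = buffer;
    const unsigned char* end = buffer + static_cast<long>(width) * height;
    while (p != end)
        sum += *p++;   // one pointer dereference per pixel, no call overhead
    return sum;
}
```

A per-pixel accessor would pay a function call (plus coordinate-to-offset arithmetic) for every pixel; the pointer walk pays it once.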
Quick Access to BW8 Pixels
Supported structures
n EBW1, EBW8, EBW32
n EC15 (*), EC16 (*), EC24 (*)
n EC24A
n EDepth8, EDepth16, EDepth32f
(*) These formats support RGB15 (5-5-5 bit packing), RGB16 (5-6-5 bit packing) and RGB32 (RGB +
alpha channel) but they must be converted to/from EC24 using EasyImage::Convert before
any processing.
Note: Transition with versions prior to eVision 6.5 should be seamless: image pixel types were
defined using typedef of integral types, pixel values were treated as unsigned numbers and
implicit conversion to/from previous types is provided.
The Save method of an image or the SaveImage method of a depth map or a ZMap saves the
image data of an image or of a depth map or a ZMap object into a file using two arguments:
n Path: path, filename, and file name extension.
n Image File Type. If omitted, the file name extension is used.
Images with a width or height bigger than 65,536 must be saved in the Open eVision
proprietary format.
Save throws an exception when:
n The requested image file format is incompatible with the image pixel type
n The Auto file type selection method is used and the file name extension is not supported
When saving a 16-bit depth map, the fixed point precision is lost and the pixels are considered as 16-
bit integers.
SaveJpeg and SaveJpeg2K specify the compression quality when saving compressed images.
They have two arguments:
n Path: a string of characters including the path, filename, and file name extension.
n Compression quality of the image file, an integer value in the range [0, 100].
SaveJpeg saves image data using JPEG File Interchange Format – JFIF.
SaveJpeg2K saves image data using JPEG 2000 File format.
JPEG compression values
Point Clouds
● Use the Save method to save the point cloud in Open eVision proprietary file format.
● Use the SavePCD method to save the point cloud in an ASCII or a binary file compatible with
other software such as PCL (Point Cloud Library).
The PCD format is supported in ASCII and binary modes.
● Use the Load method to load image data into an image object:
□ It has one argument: the path, including the filename and the file name extension.
□ File type is determined by the file format.
□ The destination image is automatically resized according to the size of the image on disk.
● The Load method throws an exception when:
□ File type identification fails
□ File type is incompatible with pixel type of the image object
Serialized image files of Open eVision 1.1 and newer are incompatible with serialized image files of
previous Open eVision versions.
When loading a BW16 image (with integer values) in a depth map, the fixed point precision set in the
depth map (0 by default) is left unchanged and used.
Point Clouds
● Use the Load method to load the point cloud from the Open eVision proprietary file format.
● Use the LoadPCD method to load the point cloud from an ASCII or a binary file compatible
with other software such as PCL (Point Cloud Library).
The image object dynamically allocates and deallocates a buffer. Memory management is
transparent.
When the image size changes, re-allocation occurs.
When an image object is destroyed, the buffer is deallocated.
To declare an image with internal memory allocation:
1. Construct an image object, for instance EImageBW8, either with width and height arguments,
OR using the SetSize function.
2. Access a given pixel. There are several functions that do this. GetImagePtr returns a pointer
to the first byte of the pixel at given coordinates.
With external memory allocation, the user controls buffer allocation or links a third-party
image buffer in memory to an Open eVision image.
Image size and buffer address must be specified.
When an image object is destroyed, the buffer is unaffected.
To declare an image with external memory allocation:
1. Declare an image object, for instance EImageBW8.
2. Create a suitably sized and aligned buffer (see Image Buffer).
3. Set the image size with the SetSize function.
4. Access the buffer with GetImagePtr. See also Retrieving Pixel Values.
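The external-allocation pattern above can be sketched as a view over a user-owned buffer. This is an illustration only: `ImageView` is a hypothetical class mimicking the SetSize/SetImagePtr/GetImagePtr sequence, not the Open eVision implementation.

```cpp
#include <cassert>
#include <vector>

// Illustrative only: a view that wraps a user-owned buffer instead of
// allocating its own. Destroying the view leaves the buffer untouched.
struct ImageView {
    unsigned char* data = nullptr;
    int width = 0, height = 0;

    void SetSize(int w, int h) { width = w; height = h; }
    void SetImagePtr(unsigned char* buf) { data = buf; }
    // Pointer to the pixel at (x, y) in a row-major gray-level buffer.
    unsigned char* GetImagePtr(int x, int y) { return data + y * width + x; }
};
```

Typical usage: the caller creates the buffer, attaches the view, writes pixels through the view, and the buffer survives the view's destruction.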
Memory Layout
● EImageC15 stores each pixel in 2 bytes. Each color component is coded with 5 bits.
The 16th bit is left unused.
Example memory layout of the first pixels of a C15 image buffer:
● EImageC16 stores each pixel in 2 bytes. The first and third color components are coded
with 5 bits. The second color component is coded with 6 bits.
Example memory layout of the first pixels of a C16 image buffer:
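The 5-5-5 and 5-6-5 packings can be sketched as bit operations. This is illustrative only: it assumes the first component occupies the high bits, which may differ from the exact component order Open eVision uses.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative only: 5-5-5 (C15-style) and 5-6-5 (C16-style) packing of three
// color components into a 16-bit word.
uint16_t PackC15(int c1, int c2, int c3) {   // 5 bits each, bit 16 unused
    return static_cast<uint16_t>(((c1 & 0x1F) << 10) | ((c2 & 0x1F) << 5) | (c3 & 0x1F));
}

uint16_t PackC16(int c1, int c2, int c3) {   // 5-6-5: 6 bits for the 2nd component
    return static_cast<uint16_t>(((c1 & 0x1F) << 11) | ((c2 & 0x3F) << 5) | (c3 & 0x1F));
}
```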
● EImageC24A stores each pixel in 4 bytes. Each color component is coded with 8 bits.
The alpha channel is also coded with 8 bits.
Example memory layout of the first pixels of a C24A image buffer:
n Destructive overlay drawing operations, such as Easy::OpenImageGraphicContext, alter the
image contents by drawing inside the image. Gray-level [color] images can only receive a
gray-level [color] overlay.
Gray 3D Rendering
3D rendering
Vector types
n EBW8Vector: a sequence of gray-level pixel values, often extracted from an image profile
(used by EasyImage::Lut, EasyImage::SetupEqualize,
EasyImage::ImageToLineSegment, EasyImage::LineSegmentToImage,
EasyImage::ProfileDerivative, ...).
n EBW16Vector: a sequence of gray-level pixel values, using an extended range (16 bits),
mainly for intermediate computations.
n EBW32Vector: a sequence of gray-level pixel values, using an extended range (32 bits),
mainly for intermediate computations
(used in EasyImage::ProjectOnARow, EasyImage::ProjectOnAColumn, ...).
n EC24Vector: a sequence of color pixel values, often extracted from an image profile
(used by EasyImage::ImageToLineSegment, EasyImage::LineSegmentToImage,
EasyImage::ProfileDerivative, ...).
You can save or load an ROI as a separate image, to be used as if it were a full image. ROIs
perform no memory allocation and never duplicate parts of their parent image; the parent
image provides them with access to its image data.
The image size of the new file must match the size of the ROI being loaded into it. The image
around the ROI remains unchanged.
ROI Classes
An Open eVision ROI inherits parameters from the abstract class EBaseROI.
There are several ROI types, according to their pixel type. They have the same characteristics as
the corresponding image types.
n EROIBW1
n EROIBW8
n EROIBW16
n EROIBW32
n EROIC15
n EROIC16
n EROIC24
n EROIC24A
Attachment
An ROI must be attached to a parent (image/ROI) with parameters that set the parent, position
and size, and these links are updated transparently, avoiding dangling pointers.
A normal image cannot be attached to another image or ROI.
Nesting
Set and Get functions change or query the width, height and position of the origin of an ROI,
with respect to its immediate or topmost parent image.
An image may accommodate an arbitrary number of ROIs, which can be nested in a
hierarchical way. Moving the ROI also moves the embedded ROIs accordingly. The image/ROI
classes provide several methods to traverse the hierarchy of ROIs associated with an image.
Nested ROIs: Two sub-ROIs attached to an ROI, itself attached to the parent image
Cropping
CropToImage crops an ROI which is partially out of its image. The resized ROI never grows.
An exception is thrown if a function attempts to use an ROI whose limits extend outside its
parent.
Note: (In Open eVision 1.0.1 and earlier, an ROI was silently resized or repositioned when placed
out of its image and sometimes grew. If ROI limits extended outside parents, they were silently
resized to remain within parent limits.)
You define and use regions of interest (ROI) to restrict the area processed with your vision tool
and to reduce and optimize the processing time.
In Open eVision:
□ An ROI (EROIxxx class) designates a rectangular region of interest.
□ A region (ERegion class) designates an arbitrarily shaped ROI. With regions, you can
determine precisely which part of the image, down to a single pixel, is used for your
processing.
Currently, only the following Open eVision methods support ERegions:
Library Method
EasyImage::Threshold
EasyImage::DoubleThreshold
EasyImage::Histogram
EasyImage::Area
EasyImage::AreaDoubleThreshold
EasyImage::BinaryMoments
EasyImage::WeightedMoments
EasyImage::GravityCenter
EasyImage
EasyImage::PixelCount
EasyImage::PixelMax
EasyImage::PixelMin
EasyImage::PixelAverage
EasyImage::PixelStat
EasyImage::PixelVariance
EasyImage::PixelStdDev
EasyImage::PixelCompare
EDepthMapToMeshConverter::Convert
EDepthMapToPointCloudConverter::Convert
Easy3D
EStatistics::ComputePixelStatistics
EStatistics::ComputeStatistics
EasyObject EImageEncoder::Encode
EasyFind EPatternFinder::Find
In future Open eVision releases, the support of ERegions will be gradually extended to all
operators.
Creating regions
Open eVision offers multiple ways to create regions, depending on the shape you need:
The ERegion is the base class for all regions and the most versatile. It encodes a region using a
Run-Length Encoded (RLE) representation.
□ The RLE representation of a region is made of runs (horizontal, 1-pixel high slices).
□ The runs are stored in the form of their ordinate, starting abscissa and length.
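The run representation above can be sketched as follows. This is an illustration of RLE encoding in general, not the internal ERegion implementation; `Run` and `EncodeRow` are hypothetical names.

```cpp
#include <cassert>
#include <vector>

// Illustrative only: each run is a horizontal, 1-pixel-high slice stored as
// (ordinate, starting abscissa, length), as described above.
struct Run { int y, x, length; };

// Build the runs for one row of a binary mask (non-zero = inside the region).
std::vector<Run> EncodeRow(const unsigned char* row, int width, int y) {
    std::vector<Run> runs;
    int x = 0;
    while (x < width) {
        if (row[x]) {
            int start = x;
            while (x < width && row[x]) ++x;       // extend the current run
            runs.push_back(Run{ y, start, x - start });
        } else {
            ++x;
        }
    }
    return runs;
}
```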
Geometry-based regions
Geometry-based regions are specialized classes of regions that are encompassed in simple
geometries. Open eVision currently provides classes based on a rectangle, a circle, an ellipse or
a polygon.
Use these classes to set up geometric regions and modify them with translation, rotation and
scaling. The transformation operators return new regions, leaving the source object unchanged.
● ERectangleRegion
□ The contour of an ERectangleRegion class is a rectangle.
□ Define it using its center, width, height and angle.
□ Alternatively, use an ERectangle instance, such as one returned by an
ERectangleGauge instance.
● ECircleRegion
□ The contour of an ECircleRegion class is a circle.
□ Define it using its center and radius or 3 non-aligned points.
□ Alternatively, use an ECircle instance, such as one returned by an ECircleGauge
instance.
● EEllipseRegion
□ The contour of an EEllipseRegion class is an ellipse.
□ Define it using its center, long and short radius and angle.
● EPolygonRegion
□ The contour of an EPolygonRegion class is a polygon.
□ It is constructed using the list of its vertices.
The ERegion class provides a set of specialized constructors to create regions from the results
of another tool.
In a tool chain, these constructors restrict the processing of a tool to the area issued from the
previous tool.
Combining regions
Use the following operations to create a new region by combining existing regions:
● Union
□ The ERegion::Union(const ERegion&, const ERegion&) method returns the region
that is the addition of the two regions passed as arguments.
Union of 2 circles
● Intersection
□ The ERegion::Intersection(const ERegion&, const ERegion&) method returns
the region that is the intersection of the two regions passed as arguments.
Intersection of 2 circles
● Subtraction
□ The ERegion::Substraction(const ERegion&, const ERegion&) method returns
the first region passed as argument after removing the second one.
Subtraction of 2 circles
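The semantics of Union and Intersection can be sketched on sorted pixel lists. This is illustrative only: Open eVision combines RLE runs internally, but the set semantics are the same; the `Pixels` alias and function names are hypothetical.

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <utility>
#include <vector>

// Illustrative only: region combination shown on sorted (y, x) pixel lists.
using Pixels = std::vector<std::pair<int, int>>;

Pixels Union(const Pixels& a, const Pixels& b) {
    Pixels out;   // every pixel belonging to either region
    std::set_union(a.begin(), a.end(), b.begin(), b.end(), std::back_inserter(out));
    return out;
}

Pixels Intersection(const Pixels& a, const Pixels& b) {
    Pixels out;   // only the pixels belonging to both regions
    std::set_intersection(a.begin(), a.end(), b.begin(), b.end(), std::back_inserter(out));
    return out;
}
```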
Using regions
The tools supporting regions provide methods that follow one of these conventions:
□ Method(const EImage& source, const ERegion& region)
□ Method(const EImage& source, const ERegion& region, EImage&
destination)
Note: The source, the region and the destination must be compatible: the region must at least
partly fit in the source, and source and destination must have the same size.
● Open eVision automatically prepares the regions when it applies them to an image, but this
preparation can take some time.
● If you do not want your first call to a method to take longer than subsequent calls, you can
prepare the region in advance by using the appropriate Prepare() method.
● To manually prepare the regions, adapt the internal RLE description to your images.
Drawing regions
● The older EROI classes of Open eVision are compatible with the new regions.
● Some tools allow the use of regions with sources and/or destinations that are ERoi instead
of EImage; these methods follow one of these conventions:
□ Method(const ERoi& source, const ERegion& region)
□ Method(const ERoi& source, const ERegion& region, ERoi& destination)
In that case, the coordinates used for the region are relative to the reduced ROI space instead
of the whole image space.
ERegion and 3D
● The new regions are compatible with the 2.5D representations of Easy3D (EDepthMap and
EZMap).
● You can also reduce the domain of processing when using these classes.
Flexible Masks
A flexible mask is a BW8 image with the same height and width as the source image. It defines
the areas to be processed and the areas to be ignored (areas that are not considered during
processing):
n All pixels of the flexible mask having a value of 0 define the ignored areas.
n All pixels of the flexible mask having any other value than 0 define the areas to be processed.
A flexible mask can be generated by any application that outputs BW8 images and by some
EasyObject and EasyImage functions.
EasyObject can use flexible masks to restrict blob analysis to complex or disconnected shaped
regions of the image.
If an object of interest has the same gray level as other regions of the image, you can define
"keep" and "ignore" areas using flexible masks and Encode functions.
A flexible mask is a BW8 image with the same height and width as the source image.
n A pixel value of 0 in the flexible mask masks the corresponding source image pixel so it
doesn't appear in the encoded image.
n Any other pixel value in the flexible mask causes the pixel to be encoded.
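The masking rule above can be sketched on plain buffers. This is an illustration only (a hypothetical `ApplyFlexibleMask` helper), not the EasyObject Encode implementation.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative only: pixels where the mask is 0 are masked out (ignored);
// any non-zero mask value lets the source pixel through (processed).
std::vector<unsigned char> ApplyFlexibleMask(const std::vector<unsigned char>& src,
                                             const std::vector<unsigned char>& mask) {
    std::vector<unsigned char> dst(src.size(), 0);
    for (std::size_t i = 0; i < src.size(); ++i)
        if (mask[i] != 0)
            dst[i] = src[i];
    return dst;
}
```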
Find four circles (left) Flexible mask can isolate the central chip (right)
2.11. Profile
Profile Sampling
Profile Analysis
Intensity Transformation
These EasyImage functions change the gray-levels of pixels to increase contrast.
Gain offset
Gain Offset changes each pixel to [old gray value * Gain coefficient + Offset].
In this example, the resulting image has better contrast and is brighter than the source image.
Source and result images (with gain = 1.2 and offset = +12)
Color images have three separate gain and offset values, one per color component (red, green,
blue).
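The gain/offset transform above can be sketched on a BW8 buffer. This is illustrative only, not the EasyImage implementation; rounding and saturation behavior are assumptions.

```cpp
#include <cassert>
#include <vector>

// Illustrative only: new pixel = old pixel * gain + offset, saturated to [0, 255].
void GainOffset(std::vector<unsigned char>& pixels, float gain, float offset) {
    for (unsigned char& p : pixels) {
        float v = p * gain + offset;
        if (v < 0.f) v = 0.f;          // saturate low
        if (v > 255.f) v = 255.f;      // saturate high
        p = static_cast<unsigned char>(v + 0.5f);   // round to nearest
    }
}
```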
Normalization
Normalize makes images of the same scene comparable, even with different lighting.
It compares the average gray level (brightness) and standard deviation (contrast) of the source
image and a reference image. Then, it normalizes the source image with gain and offset
coefficients such that the output image has the same brightness and contrast as the reference
image. This operation assumes that the camera response is reasonably linear and the image
does not saturate.
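The derivation of the normalization coefficients can be sketched from the statistics described above. This is an illustrative calculation, not the Normalize implementation; the struct and function names are hypothetical.

```cpp
#include <cassert>
#include <cmath>

// Illustrative only: derive gain and offset so that a source image with
// (srcMean, srcStdDev) gets the brightness and contrast of the reference.
struct GainOffsetPair { double gain, offset; };

GainOffsetPair ComputeNormalization(double srcMean, double srcStdDev,
                                    double refMean, double refStdDev) {
    double gain = refStdDev / srcStdDev;        // match contrast
    double offset = refMean - gain * srcMean;   // then match brightness
    return GainOffsetPair{ gain, offset };
}
```

Applying `pixel * gain + offset` then maps the source mean onto the reference mean and the source standard deviation onto the reference standard deviation.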
The reference image (from which the average and standard deviation are computed),
the source image (too bright),
and the normalized image (contrast and brightness are the same as the reference image)
Uniformization
Uniformize compensates for non-uniform illumination and/or camera sensitivity based on one
or two reference images. The reference image should not contain saturated pixel values and
should have minimal noise.
n When one reference image is used, the transformation is similar to an adaptive (space-
variant) gain; each pixel in the reference image encodes the gain for the corresponding pixel
in the source image.
n When two reference images are used, the transformation is similar to an adaptive gain and
offset; each pixel in the reference images encodes either the gain or the offset for the
corresponding pixel in the source image.
Lookup tables
Lut uses a lookup table of new pixel values to replace the current ones - efficient for BW8 and
BW16 images. If the transform function never changes, it is best to use a lookup table.
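The principle can be sketched as follows: compute the 256-entry table once, then apply it with a single table read per pixel. This is illustrative only (the contrast-stretch transform and function names are hypothetical), not the Lut implementation.

```cpp
#include <array>
#include <cassert>
#include <vector>

// Illustrative only: precompute a 256-entry lookup table for a BW8 transform.
std::array<unsigned char, 256> BuildLut(float gain, float offset) {
    std::array<unsigned char, 256> lut{};
    for (int v = 0; v < 256; ++v) {
        float r = v * gain + offset;
        lut[v] = static_cast<unsigned char>(r < 0 ? 0 : (r > 255 ? 255 : r));
    }
    return lut;
}

void ApplyLut(std::vector<unsigned char>& pixels,
              const std::array<unsigned char, 256>& lut) {
    for (unsigned char& p : pixels)
        p = lut[p];   // one table read per pixel, no per-pixel arithmetic
}
```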
Example of a transform
Thresholding
Thresholding transforms an image by classifying the pixel values using these methods:
n "Automatic thresholding" on the next page (BW8 and BW16 images only)
n "AutoThreshold" on page 45 (BW8 and BW16 images only)
n "Manual thresholding" on the next page using one or two threshold values
n "Histogram based" on the next page (computed before using the thresholding function)
These functions also return the average gray levels of each pixel below and above the threshold.
Automatic thresholding
The threshold is calculated automatically if you use one of these arguments with the
EasyImage::Threshold function.
Min Residue: Minimizes the quadratic difference between the source and resulting image
(default if the Threshold function is invoked without an argument).
Max Entropy: Maximizes the entropy (i.e. the amount of information) between object and
background of the resulting image.
Isodata: Calculates a threshold value that is an average of the gray levels: halfway between the
average gray level of pixels below the threshold, and the average gray level of pixels above the
threshold.
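The Isodata rule above can be sketched as an iteration that stops when the threshold equals the midpoint of the two class means. This is an illustrative implementation of the general Isodata algorithm, not the EasyImage code.

```cpp
#include <cassert>
#include <vector>

// Illustrative only: iterate until the threshold is halfway between the
// average gray level below it and the average gray level above it.
int IsodataThreshold(const std::vector<unsigned char>& pixels) {
    int t = 128, previous = -1;
    while (t != previous) {
        previous = t;
        long sumLo = 0, nLo = 0, sumHi = 0, nHi = 0;
        for (unsigned char p : pixels) {
            if (p < t) { sumLo += p; ++nLo; }
            else       { sumHi += p; ++nHi; }
        }
        double meanLo = nLo ? double(sumLo) / nLo : 0.0;
        double meanHi = nHi ? double(sumHi) / nHi : 255.0;
        t = static_cast<int>((meanLo + meanHi) / 2.0 + 0.5);
    }
    return t;
}
```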
Manual thresholding
Manual thresholds require that the user supplies one or two threshold values:
n one value to the Threshold function to classify source image pixels (BW8/BW16/C24) into
two classes and create a bi-level image. This can be:
o relativeThreshold is the percentage of pixels below the threshold. The Threshold
function then computes the appropriate threshold value, or
o absoluteThreshold. This value must be within the range of pixel values in the source
image.
n two values to the DoubleThreshold function to classify source image pixels (BW8/BW16)
into three classes and create a tri-level image.
o LowThreshold is the lower limit of the threshold
o HighThreshold is the upper limit of the threshold
Histogram based
When a histogram of the source image is available, you can speed up the automatic
thresholding operation by computing the threshold value from the histogram (using
HistogramThreshold or HistogramThresholdBW16) and using that value in a manual
thresholding operation.
These functions also return the average gray levels of each pixel below and above the threshold.
AutoThreshold
When no source image histogram is available, AutoThreshold can still calculate a threshold
value using these threshold modes: EThresholdMode_Relative, _MinResidue, _MaxEntropy and
_Isodata.
This function supports flexible masks.
Note: For logical operators, a pixel with value 0 is assumed FALSE, otherwise TRUE. The
result of a logical operation is 0 when FALSE and 255 otherwise.
General
n Compare (abs. value of the difference)
n Saturated sum
n Saturated difference
n Saturated product
n Saturated quotient
n Modulo
n Overflow-free sum
n Overflow-free difference
n Overflow-free product
n Overflow-free quotient
n Bitwise AND
n Bitwise OR
n Bitwise XOR
n Minimum
n Maximum
n Equal
n Not equal
n Greater or equal
n Lesser or equal
n Greater
n Lesser
Copy
n Sheer Copy
Invert
n Invert (negative)
Shift
n Left Shift
n Right Shift
Logical
n Logical AND
n Logical OR
n Logical XOR
Overlay
n Add an overlay
Set
Operators Copy if mask = 0 and Copy if mask <> 0 are very handy to perform masking: the first
image argument serves as a mask that allows or disallows changing the pixel values in the
destination image.
n Copy if mask = 0
n Copy if mask <> 0
Non-Linear Filtering
These functions use non-linear combinations of neighboring pixels to highlight a shape, or to
remove noise.
Most can be destructive (except top-hat and median filters), i.e. the source image is overwritten
by the destination image. Destructive operations are faster.
All have a gray image and a bilevel equivalent, for example ErodeBox and BiLevelErodeBox.
1. They define the required shape by a "Kernel" on the next page (usually in a 3x3 matrix).
2. They slide this Kernel over the image to determine the value of the destination pixel when
a match is found:
n Erosion, Dilation: shrinks / grows image regions.
n Opening, Closing: removes / fills image region boundary pixels.
n Thinning, Thickening: erodes / dilates using image pattern matching.
n Top-Hat filters: retain all the tiny image details while removing everything else.
n Morphological distance: indicates how many erosions are required to make a pixel
black.
n Morphological gradient: indicates the outer and inner edges of the erosion and dilation
processes.
n Median filter: removes impulsive noise.
n Hit-and-Miss transform: detects patterns of foreground/background pixels, can create
skeletons.
Kernel
Rectangular kernel of half width = 3 and half height = 2 (left) Circular kernel of half width =
2 (right)
The morphological operators combine the pixel values in a neighborhood of given shape
(square, rectangular or circular) and replace the central pixel of the neighborhood by the
result. The combining function is non-linear, and in most cases is a rank filter: it considers
the N values in the given neighborhood, sorts them in increasing order and selects the K-th
largest. Three special cases are most often used: erosion, dilation and the median filter,
where K is 1 (minimum of the set), N (maximum) or N/2 (median).
Erosion, Dilation, Opening, Closing, Top-Hat and Morphological Gradient operations all use
rectangular or circular kernels of odd size. Kernel size has an important impact on the result.
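The rank-filter principle can be sketched on a 3x3 neighborhood. This is an illustration only (hypothetical function, borders left unchanged for simplicity), not the EasyImage operators: rank 0 gives erosion (minimum), rank 8 dilation (maximum), rank 4 the median.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Illustrative only: select the rank-th smallest of the 9 values in the
// 3x3 neighborhood of each interior pixel.
std::vector<unsigned char> RankFilter3x3(const std::vector<unsigned char>& src,
                                         int width, int height, int rank) {
    std::vector<unsigned char> dst(src);   // borders are copied unchanged
    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            unsigned char n[9];
            int k = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    n[k++] = src[(y + dy) * width + (x + dx)];
            std::nth_element(n, n + rank, n + 9);   // partial sort up to 'rank'
            dst[y * width + x] = n[rank];
        }
    }
    return dst;
}
```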
Examples
Erosion, Dilation
Erosion reduces white objects and enlarges black objects, Dilation does the opposite.
Erosion Dilation
Erosion thins white objects by removing a layer of pixels along the object edges: ErodeBox,
ErodeDisk. As the kernel size increases, white objects disappear and black ones get fatter.
Dilation thickens white objects by adding a layer of pixels along the object edges: DilateBox,
DilateDisk. As the kernel size increases, white objects get fatter and black ones disappear.
Opening, Closing
Opening removes tiny white objects / dust. Closing removes tiny black holes / dust.
Opening Closing
Thinning, Thickening
These functions use a 3x3 kernel to grow (Thick) or remove (Thin) pixels:
n Thinning: can help edge detectors by reducing lines to single pixel thickness.
n Thickening: can help determine approximate shape, or skeleton.
When a match is found between the kernel coefficients and the neighborhood of a pixel, the
pixel value is set to 255 if thickening, or 0 if thinning. The kernel coefficients are:
n 0: matching black pixel, value 0
n 1: matching non-black pixel, value > 0
n -1: don't care
Top-Hat filters
They take the difference between an image and its opening (or closing). Thus, they keep
features that an opening (or closing) would erase. The result is a perfectly flat background
where only black or white features smaller than the kernel size appear.
n The white top-hat filter enhances thin white features: WhiteTopHatBox, WhiteTopHatDisk.
n The black top-hat filter enhances thin black features: BlackTopHatBox, BlackTopHatDisk.
Morphological distance
Distance computes the morphological distance (number of erosion passes to set a pixel to
black) of a binary image (0 for black, non 0 for white) and creates a destination image, where
each pixel contains the morphological distance of the corresponding pixel in the source image.
Morphological gradient
The morphological gradient performs edge detection - it removes everything in the image but
the edges.
The morphological gradient is the difference between the dilation and the erosion of the image,
using the same structuring element.
MorphoGradientBox, MorphoGradientDisk.
Median
The Median filter removes impulse noise, whilst preserving edges and image sharpness.
It replaces every pixel by the median (central value) of its neighbors in a 3x3 square kernel;
thus, outer pixels are discarded.
Hit-and-Miss transform
Hit-and-miss transform operates on BW8, BW16 or C24 images or ROIs to detect a particular
pattern of foreground and background pixels in an image.
Hit-and-miss transform
1. Define the kernel by detecting the left corner. The left corner pixel has black pixels on its
immediate left, top and bottom; and it has white pixels on its right. The following hit-and-miss
kernel will detect the left corner:
- +
- + +
- +
2. Apply the filter on the source image. Note that the resulting image should be properly sized.
3. Locate the three remaining corners in the same way: Declare three kernels that are the
rotation of the filter above and apply them.
4. Detect the right, top and bottom corners.
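The kernel-matching test in the steps above can be sketched as follows. This is illustrative only, using the coefficient convention from the Thinning/Thickening section (0 = must be black, 1 = must be non-black, -1 = don't care); the function name is hypothetical.

```cpp
#include <cassert>
#include <vector>

// Illustrative only: test whether the 3x3 neighborhood of (x, y) matches a
// hit-and-miss kernel. Returns true on a match.
bool HitAndMissMatch(const std::vector<unsigned char>& img, int width,
                     int x, int y, const int kernel[9]) {
    for (int dy = -1; dy <= 1; ++dy) {
        for (int dx = -1; dx <= 1; ++dx) {
            int coeff = kernel[(dy + 1) * 3 + (dx + 1)];
            if (coeff == -1) continue;                       // don't care
            unsigned char p = img[(y + dy) * width + (x + dx)];
            if (coeff == 0 && p != 0) return false;          // must be black
            if (coeff == 1 && p == 0) return false;          // must be non-black
        }
    }
    return true;
}
```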
Geometric Transforms
Geometric transformation moves selected pixels in an image, which is useful if a shape in an
image is too large / small / distorted, to make point-to-point comparisons possible.
The selected area may be any shape, but the resulting image is always rectangular. Pixels in the
destination image that have corresponding pixels that are outside of the selected area are
considered not relevant and are left black.
When the source coordinates for a destination pixel are not integer, an interpolation technique
is required.
The nearest neighbor method is the quickest - it uses the closest source pixel.
The bi-linear interpolation method is more accurate but slower - it uses a weighted average of
the four neighboring source pixels.
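The bilinear weighting of the four neighbors can be sketched as follows. This is an illustration of the interpolation formula only (a hypothetical helper assuming the four neighbors exist inside the buffer), not the Open eVision resampler.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative only: bilinear interpolation at a non-integer source coordinate,
// the weighted average of the four neighboring pixels.
double BilinearSample(const std::vector<unsigned char>& img, int width,
                      double x, double y) {
    int x0 = static_cast<int>(std::floor(x));
    int y0 = static_cast<int>(std::floor(y));
    double fx = x - x0, fy = y - y0;     // fractional parts are the weights
    double p00 = img[y0 * width + x0],       p10 = img[y0 * width + x0 + 1];
    double p01 = img[(y0 + 1) * width + x0], p11 = img[(y0 + 1) * width + x0 + 1];
    return p00 * (1 - fx) * (1 - fy) + p10 * fx * (1 - fy)
         + p01 * (1 - fx) * fy       + p11 * fx * fy;
}
```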
Possible geometric transformations are:
ReAlignment
The simplest way to realign two misaligned images is to accurately locate features in both
images (landmarks or pivots, using pattern matching, point measurement or other) and realign
one of the images so that these features are superimposed.
You can register an image by realigning one, two or three pivot points to reference positions. For
best accuracy, the pivot points should be as far apart as possible.
n A single pivot point transform is a simple translation. If interpolation bits are used, sub-pixel
translation is achieved.
n Two pivot points use a combination of translation, rotation and optionally scaling. If scaling
is not allowed, the second pivot point may not be matched exactly. Scaling should not
normally be used unless it corresponds to a change of lens magnification or viewing
distance.
n Three pivot points use a combination of translation, rotation, shear correction and
optionally scaling. A shear effect can arise when acquiring images with a misaligned line-
scan camera.
Mirroring
This destructive feature modifies the source image to create a mirror image:
n horizontally (the columns are swapped) or
n vertically (the rows are swapped).
If the position or size of an object of interest changes, you can measure the change in position
or size and generate a corrected image using the ScaleRotate and Shrink functions.
EasyImage::ScaleRotate performs:
n Image translation: you provide the position coordinates of a pivot-point in the source image
and a corresponding pivot point in the destination image.
n Image scaling: you provide scaling factor values for X- and Y-axis.
n Image rotation: you provide a rotation angle value.
For resampling, the nearest neighbor rule or bilinear interpolation with 4 or 8 bits of accuracy is
used. The size of the destination image is arbitrary.
Shrink
LUT-based unwarping
If the feature of interest is distorted due to its shape (anamorphosized), you can unwarp a
circular ring-wedge shape (such as text on CD labels) into a straight rectangle. A ring-wedge is
delimited by two concentric circles and two straight lines passing through the center.
EasyImage::SetCircleWarp prepares warp images for use with the function EasyImage::Warp,
which moves each pixel to the location specified in the "warp" images, used as lookup tables.
Impulse noise produces a "salt and pepper" effect, while uniform noise blends.
Spatial noise reduction is achieved by filtering:
n Convolution replaces the value at each pixel by a combination of its neighbors, leading to a
localized averaging. Linear filtering is recommended to reduce uniform noise. Beware that it
tends to blur edges.
n Median filtering replaces each pixel by the median value in the pixel neighborhood (the
5th largest value in a 3x3 neighborhood). This reduces impulse noise while keeping sharpness.
o EasyImage::Median
o EasyImage::BiLevelMedian
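The median rule above can be sketched for a 3x3 neighborhood as follows. This is an illustrative sketch of the principle, not EasyImage::Median itself; the function name is hypothetical.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// 3x3 median filter: each interior pixel becomes the 5th largest value of
// its 3x3 neighborhood, removing isolated impulse ("salt and pepper")
// spikes while preserving edges. Border pixels are copied unchanged.
std::vector<uint8_t> median3x3(const std::vector<uint8_t>& src,
                               int width, int height) {
    std::vector<uint8_t> dst(src);
    for (int y = 1; y < height - 1; ++y)
        for (int x = 1; x < width - 1; ++x) {
            uint8_t n[9];
            int i = 0;
            for (int ky = -1; ky <= 1; ++ky)
                for (int kx = -1; kx <= 1; ++kx)
                    n[i++] = src[(y + ky) * width + (x + kx)];
            std::nth_element(n, n + 4, n + 9);  // median = 5th value of 9
            dst[y * width + x] = n[4];
        }
    return dst;
}
```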
Temporal noise reduction is achieved by combining the successive values of individual pixels
across time. EasyImage implements recursive averaging and moving averaging.
EasyImage provides three ways to minimize noise by means of several images:
n Temporal average: accumulates N images and averages them using standard arithmetic
operations, producing a de-noised image after N acquisitions. Noise varies from frame to
frame while the signal remains unchanged, so if several images of the same (still) scene are
available, the noise can be separated from the signal.
The disadvantage of producing one de-noised image only after N acquisitions is that fast
display refresh is not possible.
Simple average
n Temporal moving average: accumulates the last N images and updates the de-noised image
each time a new one is acquired, in such a way that the computation time does not depend
on N. The whole process is handled by EMovingAverage. The disadvantage of this method is
that it combines noisy images together.
Moving average
n Temporal recursive average: combines a noisy image with the previously de-noised image
using EasyImage::RecursiveAverage.
Recursive average
Recursive averaging
This is a well known process for noise reduction by temporal integration. The principle is to
continuously update a noise-free image by blending it, using a linear combination, with the
raw, noisy, live image stream. Algorithmically, this amounts to the following:
Dst(N) = a * Src + (1 - a) * Dst(N-1)
where a is a mixture coefficient. The value of this coefficient can be adjusted so that a
prescribed noise reduction ratio is achieved.
This procedure is effective when applied to still images, but generates a trailing effect on
moving objects. The larger the noise reduction ratio, the heavier the trailing effect is. To work
around this, a non-linearity can be introduced in the blending process: small gray-level value
variations between successive images are usually noise, while large variations correspond to
changes in the image.
EasyImage::RecursiveAverage uses this observation and applies stronger noise reduction to
small variations and conversely. This reduces noise in still areas and trailing in moving areas.
For optimal performance, the non-linearity must be pre-computed once and for all using the
function EasyImage::SetRecursiveAverageLUT.
Note: Before the first call to the EasyImage::RecursiveAverage method, the 16-bit work
image must be cleared (all pixel values set to zero).
To estimate the amount of noise, two or more successive images are required. In the simplest
mode, two noisy images are compared. (Other modes are available: if a noise-free image is
available, it is compared to a noisy one; a noise-free image can also be built by temporal
averaging.) Two functions calculate the root-mean-square amplitude and the signal-to-noise ratio:
n EasyImage::RmsNoise computes the root-mean-square amplitude of noise, by comparing a
given image to a reference image. This function supports flexible mask and an input mask
argument. BW8, BW16 and C24 source images are supported.
The reference image can be noiseless (obtained by suppressing the source of noise), or
affected by a noise of the same distribution as the given image.
n EasyImage::SignalNoiseRatio computes the signal to noise ratio, in dB, by comparing a
given image to a reference image. This function supports flexible mask and an input mask
argument. BW8, BW16 and C24 source images are supported.
The reference image can be noiseless (obtained by suppressing the source of noise) or be
affected by a noise of the same distribution as the given image.
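The two measurements can be sketched as follows. The RMS amplitude follows directly from the definition; the SNR convention used here, 10*log10(sum(ref^2) / sum((img - ref)^2)), is an assumption for illustration and not necessarily the exact EasyImage formula.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Root-mean-square noise amplitude between an image and a reference.
double rmsNoise(const std::vector<uint8_t>& img,
                const std::vector<uint8_t>& ref) {
    double sum = 0.0;
    for (size_t i = 0; i < img.size(); ++i) {
        double d = (double)img[i] - (double)ref[i];
        sum += d * d;
    }
    return std::sqrt(sum / img.size());
}

// Signal-to-noise ratio in dB (assumed convention, see lead-in).
double signalNoiseRatioDb(const std::vector<uint8_t>& img,
                          const std::vector<uint8_t>& ref) {
    double signal = 0.0, noise = 0.0;
    for (size_t i = 0; i < img.size(); ++i) {
        double d = (double)img[i] - (double)ref[i];
        signal += (double)ref[i] * (double)ref[i];
        noise  += d * d;
    }
    return 10.0 * std::log10(signal / noise);
}
```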
Scalar Gradient
EasyImage::GradientScalar computes the (scalar) gradient image derived from a given
source image.
The scalar value derived from the gradient depends on the preset lookup-table image.
The gradient of a grayscale image corresponds to a vector, the components of which are the
partial derivatives of the gray-level signal in the horizontal and vertical direction. A vector can
be characterized by a direction and a length, corresponding to the gradient orientation, and the
gradient magnitude.
This function generates a gradient direction or gradient magnitude map (gray-level image) from
a given gray-level image.
For efficiency, a pre-computed lookup-table is used to define the desired transformation.
This lookup-table is stored as a standard EImageBW8/EImageBW16.
Use EasyImage::ArgumentImage or EasyImage::ModulusImage once before calling
GradientScalar.
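The modulus/argument decomposition described above can be sketched with central differences. This is an illustrative computation of the two gradient maps, not GradientScalar's lookup-table mechanism; the helper name and the choice of central-difference kernels are assumptions.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Gradient magnitude (modulus) and orientation (argument) of a gray-level
// image, from central-difference partial derivatives. Border pixels are
// left at zero.
void gradient(const std::vector<uint8_t>& src, int width, int height,
              std::vector<double>& modulus, std::vector<double>& argument) {
    modulus.assign(src.size(), 0.0);
    argument.assign(src.size(), 0.0);
    for (int y = 1; y < height - 1; ++y)
        for (int x = 1; x < width - 1; ++x) {
            double gx = (src[y * width + x + 1] - src[y * width + x - 1]) / 2.0;
            double gy = (src[(y + 1) * width + x] - src[(y - 1) * width + x]) / 2.0;
            modulus[y * width + x]  = std::hypot(gx, gy);   // gradient length
            argument[y * width + x] = std::atan2(gy, gx);   // gradient direction
        }
}
```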
Vector Operations
Extracting 1-dimensional data from an image generates linear sets of data that are handled as
vectors. Subsequent operations are fast because of the reduced amount of data. The methods
are either:
Projection
Projects the sum or average of all gray color-level values in a given direction, into various
vector types (levels are added when projecting into an EBW32Vector and averaged when
projecting into an EBW8Vector, EBW16Vector or EC24Vector). These functions support
flexible mask.
n EasyImage::ProjectOnAColumn projects an image horizontally onto a column.
n EasyImage::ProjectOnARow projects an image vertically onto a row.
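The two projections can be sketched as follows, keeping 32-bit sums as when projecting into an EBW32Vector. The helper names are hypothetical, for illustration only.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Project an image horizontally onto a column: one sum per row.
std::vector<uint32_t> projectOnColumn(const std::vector<uint8_t>& src,
                                      int width, int height) {
    std::vector<uint32_t> col(height, 0);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            col[y] += src[y * width + x];
    return col;
}

// Project an image vertically onto a row: one sum per column.
std::vector<uint32_t> projectOnRow(const std::vector<uint8_t>& src,
                                   int width, int height) {
    std::vector<uint32_t> row(width, 0);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            row[x] += src[y * width + x];
    return row;
}
```

Because the result is a 1-D vector, later analysis (peak search, defect detection) operates on far less data than the original image.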
Profile
Samples a series of pixel values along a given segment, path or contour, then analyzes and
modifies their Peaks and Transitions to make images clearer:
1. Obtain the profile of a line segment / path / contour.
EasyImage::ImageToLineSegment copies the pixel values along a given line segment
(arbitrarily oriented) to a vector. The line segment must be entirely contained within the
image. The vector length is adjusted automatically. This function supports flexible mask.
EasyImage::ImageToPath copies the corresponding pixel values to the vector. The
function supports flexible mask. A path is a series of pixel coordinates stored in a vector.
EasyImage::Contour follows the contour of an object and stores its constituent pixel
values inside a profile vector. A contour is a connected path, open or closed, forming the
boundary of an object.
The EasyImage Canny edge detector operates on a grayscale BW8 image and delivers a black-
and-white BW8 image where pixels can take only 2 values: 0 and 255. Pixels corresponding
to edges in the source image are set to 255; all others are set to 0. The detector can adjust the
scale of analysis; it does not perform sub-pixel interpolation and delivers a binary image after
thresholding.
The API of the Canny edge detector is a single class, ECannyEdgeDetector, with the following
methods:
n Apply: applies the Canny edge detector on an image/ROI.
n GetHighThreshold: returns the high hysteresis threshold for a pixel to be considered as an
edge.
n GetLowThreshold: returns the low hysteresis threshold for a pixel to be considered as an
edge.
n GetSmoothingScale: returns the scale of the features of interest.
n GetThresholdingMode: returns the mode of the hysteresis thresholding.
n ResetSmoothingScale: prevents the smoothing of the source image by a Gaussian filter.
n SetHighThreshold: sets the high hysteresis threshold for a pixel to be considered as an
edge.
n SetLowThreshold: sets the low hysteresis threshold for a pixel to be considered as an edge.
n SetSmoothingScale: sets the scale of the features of interest.
n SetThresholdingMode: sets the mode of the hysteresis thresholding.
The result image must have the same dimensions as the input image.
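The role of the two hysteresis thresholds listed above can be sketched as follows. This illustrates the final Canny stage only (hysteresis thresholding of a gradient-magnitude map); it is a sketch of the principle, not the ECannyEdgeDetector code, and the function name is hypothetical.

```cpp
#include <cassert>
#include <cstdint>
#include <stack>
#include <utility>
#include <vector>

// Hysteresis thresholding: gradient pixels >= high seed edges; pixels
// >= low are kept only when 8-connected to a seed. Everything else is 0.
std::vector<uint8_t> hysteresis(const std::vector<float>& grad,
                                int width, int height,
                                float low, float high) {
    std::vector<uint8_t> edge(grad.size(), 0);
    std::stack<std::pair<int, int>> seeds;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            if (grad[y * width + x] >= high) {
                edge[y * width + x] = 255;
                seeds.push({x, y});
            }
    while (!seeds.empty()) {           // grow edges through weak pixels
        int x = seeds.top().first, y = seeds.top().second;
        seeds.pop();
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
                int nx = x + dx, ny = y + dy;
                if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                int i = ny * width + nx;
                if (!edge[i] && grad[i] >= low) {
                    edge[i] = 255;
                    seeds.push({nx, ny});
                }
            }
    }
    return edge;
}
```

This is why both thresholds matter: the high one decides which edges exist, the low one decides how far they extend.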
Overlay
EasyImage::Overlay overlays an image on the top of a color image, at a given position.
If a color image is provided as the source image, all the pixels of this image are copied to the
destination image, except the ones that equal the reference color. When a C24 image is used as
overlay source image, the color of the overlay in destination image is the same as the one in the
overlay source image, thus allowing multicolored overlays.
If a BW8 image is provided as the source image, all the overlay image pixels are copied to the
destination image, apart from those that equal the reference color, which are left unchanged
in the destination image.
This function supports flexible mask and an input mask argument. C24, C15 and C16 source
images are supported.
dstImage_Width = srcImage_Width
dstImage_Height = (srcImage_Height + 1 - odd ) / 2
Resulting image
n RmsNoise, SignalNoiseRatio.
n Overlay (no overload with mask argument for BW8 source images).
n ProjectOnAColumn, ProjectOnARow (Vector projection).
n ImageToLineSegment, ImageToPath (Vector profile).
In general:
All image processing operations can use quantized coordinates: discrete values in the [0..255]
interval, which use a byte representation to store images in a frame buffer.
Color system conversion operations can also use simpler unquantized coordinates: continuous
values, often normalized to the [0..1] interval.
Color Image Processing
A color image is a vector field with three components per pixel. All three RGB components
reflected by an object have amplitude proportional to the intensity of the light source. By
considering the ratio of two color components, one obtains an illumination-independent image.
With a clever combination of three pieces of information per pixel, one can extract better
features.
There are 3 ways to process a color image:
n Component extraction: you can extract the most relevant feature from the triple color
information, to reduce the amount of data. For instance, objects may be distinguished by
their hue, a pre-processing step could transform the image to a gray-level image containing
only hue values.
n De-coupled transformation: you can perform operations separately on each color
component. For instance, adding two images together adds the red, green and blue
components and stores the result, component by component, in a resulting color image.
n Coupled transformation: you can combine all three color components to produce three
derived components. For example, converting YIQ to RGB.
Supported color systems
EasyColor supports the color systems RGB, XYZ, L*a*b*, L*u*v*, YUV, YIQ, LCH, ISH/LSH, VSH and
YSH.
RGB is the preferred internal representation as it is compatible with 24-bit Windows Bitmaps.
EasyColor's lookup tables provide an array of values that define what output corresponds to a
given input, so an image can be changed by a user-defined transformation.
A color pixel can take 16,777,216 (2^24) values. A full color LUT with that many entries would
occupy 50 MB of memory and the transforms would be prohibitively time-consuming. Pre-computed
LUTs make color transforms feasible.
To transform a color image, you initialize a color LUT using one of the following functions:
"LUT for Gain/Offset (Color) " on page 70: EasyImage::GainOffset,
"LUT for Color Calibration" on page 71: Calibrate,
"LUT for Color Balance" on page 71: WhiteBalance,
ConvertFromRGB, ConvertToRGB.
This color LUT is then used in a transform operation such as EasyColor::Transform, or you
can create a custom transform using EColorLookup, which takes unquantized values
(continuous, normalized to [0..1] intervals) and specifies the source and destination color
systems. Some operations use the LUT on the fly, thus avoiding storage of the transformed
image, for example to alter the U (of YUV) component while the image is in RGB format.
The optimum combination of accuracy and speed is determined by the choice of IndexBits
and Interpolation - the accuracy of the transformed values roughly corresponds to the
number of index bits.
n Fewer table entries mean smaller storage requirements, but less accuracy.
n No interpolation gives quicker running time, but less accuracy. Interpolation can recover
8 bits of accuracy per component. When the involved transform is linear (such as YUV to
RGB), interpolation always gives exact results, regardless of the number of table entries.
Color coordinates in the classical systems are normally continuous values, often normalized to
the [0..1] interval. Computations on such values, termed unquantized, are simpler.
However, storage of images in a frame buffer imposes a byte representation, corresponding to
discrete values, in the [0..255] interval. Such values are termed quantized.
All image processing operations apply to quantized values, but conversion operations can also
be specified using unquantized coordinates.
YUV images can be stored more compactly without degrading visual quality using the function
Format444To422 to convert from 4:4:4 to 4:2:2 format (Format422To444 performs the reverse
conversion).
n 4:4:4 uses 3 bytes of information per pixel.
n 4:2:2 uses 2 bytes of information per pixel.
It stores the even pixels of the U and V chroma together with the even and odd pixels of the
Y luma.
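The chroma-plane reduction can be sketched as follows. Averaging each even/odd chroma pair is an assumption made for this example (a simpler converter could just drop the odd samples); the helper name is hypothetical and the image width is assumed even.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Reduce one chroma plane (U or V) from 4:4:4 to 4:2:2: keep one chroma
// sample per pair of horizontal pixels. Y is untouched, so the total cost
// drops from 3 to 2 bytes per pixel.
void format444To422(const std::vector<uint8_t>& u444,
                    std::vector<uint8_t>& u422, int width, int height) {
    u422.assign(width / 2 * height, 0);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; x += 2) {
            int a = u444[y * width + x], b = u444[y * width + x + 1];
            u422[y * (width / 2) + x / 2] = (uint8_t)((a + b + 1) / 2);
        }
}
```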
EasyColor can change or extract one plane at a time, or all three together. See Compose,
Decompose, GetComponent, SetComponent.
These operations can use a color LUT to transform on the fly; for example, they could build an
RGB image from lightness, saturation and hue planes.
The trick is to define a regular gamut of 256 colors; each color is assigned to the pixels
with the corresponding gray-level value.
To define pseudo-color shades, you specify a trajectory in the color space of an arbitrary
system. You can then pseudo-color using the drawing functions color palette (see Image and
Vector Drawing) then save and/or transform it like any other color image.
This EasyColor process takes a set of distinct colors and associates each pixel with the closest
color, using a layer index that can then be used in EasyObject with the labeled image segmenter
to improve blob creation.
Bayer Transform
The Bayer pattern is a color image encoding format for capturing color information from a
single sensor.
A color filter with a specific layout is placed in front of the sensor so that some of the pixels
receive red light only, while others receive green or blue only.
An image encoded with the Bayer pattern has the same format as a gray-level image but conveys
three times less information. The true horizontal and vertical resolutions are smaller than those
of a true color image.
Note: The Bayer pattern normally starts with a GB/RG block in the upper left corner. If the
image is cropped, this parity rule can be lost, but parity adjustment is unnecessary when
working on an Open eVision ROI.
The Bayer conversion method EasyColor::BayerToC24 transforms an image captured using
the Bayer pattern and stored as a gray-level image, into a true color image. There are three
ways to reconstruct the missing pixels. The more complex the interpolation, the slower the
conversion. However, it is highly recommended to use interpolation.
n Non-interpolated mode: duplicates the nearest pixel to above and/or to the left of the
current pixel.
n Standard interpolated mode: averages relevant neighboring pixels.
n Improved interpolated mode (recommended): interpolates the unknown component
values. This mode reduces visible artifacts along object edges.
Converted images with no (top), standard (left) and improved interpolation method (right)
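The standard interpolated mode can be sketched for one component: reconstructing green at a red or blue site by averaging its four green neighbors. This is an illustrative fragment, not EasyColor::BayerToC24 (a full demosaic also reconstructs red and blue); the GB/RG layout with green at sites where x + y is even follows the parity note above, and the helper name is hypothetical.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Green value at interior pixel (x, y) of a GB/RG Bayer image:
// sampled directly at green sites, averaged from the 4 green neighbors
// (up, down, left, right) at red/blue sites.
uint8_t greenAt(const std::vector<uint8_t>& bayer, int width, int x, int y) {
    if ((x + y) % 2 == 0)                 // green is sampled here
        return bayer[y * width + x];
    int sum = bayer[y * width + x - 1] + bayer[y * width + x + 1]
            + bayer[(y - 1) * width + x] + bayer[(y + 1) * width + x];
    return (uint8_t)((sum + 2) / 4);      // rounded average
}
```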
n When applied to luma/chroma representations, the gain and offset of the chromatic
components should vary in a similar way.
n When applied to intensity/saturation/hue representation, it makes no sense to apply gain
and offset to the hue component.
Note: The contrast enhancement function can be used to uniformize a given component: setting
the gain to 0 for some component has the same result as setting all pixels to the value of the
offset for this component.
The calibration transform can be based on one, three or four reference colors. In the first case,
calibration is a gain adjustment for the three color components. In the second and third cases, a
linear or an affine transform is used.
Gamma Pre-Compensation
Many color cameras use a gamma pre-compensation process that deals with the non-linear
response of the display device (such as a TV monitor).
Gamma pre-compensation should be applied after processing, because applying it before would
alter the result through the non-linearity it introduces.
The pre-compensation process applies the inverse transform to the signal, so that the image
renders correctly on the display. Three pre-defined gamma values are available, depending on
the video standard at hand:
Many color cameras have a built-in gamma pre-compensation feature that can be turned off. If
this feature cannot be turned off and is not desired, its effect can be canceled by applying the
direct gamma transform. The following pre-defined gamma values are available for this
purpose:
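The two directions of the transform can be sketched as a lookup table. The power-law form out = 255 * (in/255)^e is the usual convention and an assumption here; the library ships pre-defined gamma values per video standard, and the helper name is hypothetical.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <cstdlib>
#include <vector>

// Build a 256-entry gamma LUT. Pre-compensation uses exponent 1/gamma
// (brightens mid-tones for the display); the direct transform uses gamma
// itself and cancels a camera's built-in pre-compensation.
std::vector<uint8_t> gammaLut(double gamma, bool precompensate) {
    double e = precompensate ? 1.0 / gamma : gamma;
    std::vector<uint8_t> lut(256);
    for (int i = 0; i < 256; ++i)
        lut[i] = (uint8_t)std::lround(255.0 * std::pow(i / 255.0, e));
    return lut;
}
```

Applying the direct LUT after the pre-compensation LUT returns (up to rounding) the original values, which is exactly the cancellation described above.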
White Balance
A camera may exhibit color imbalance, i.e. the three color channels may have mismatched gains,
or the illuminant (the light source) may not be perfectly white. When this occurs, white areas
appear as an unsaturated color. The white balance correction automatically adjusts three
independent gains so that the components of a white pixel become equal. This means that a
white balance calibration step is required, during which a white surface is shown to the
camera and the corresponding color components are measured. PixelAverage can be used for
this purpose.
Raw image, and image with white balance and gamma pre-compensation
The average and standard deviation of gray-level values can be computed in a sliding window,
i.e., computed for every position of a rectangular window centered on every pixel. The window
size is arbitrary.
Note: The computing time of these functions does not depend on the window size.
The result of the operation is another image.
The local average, EasyImage::LocalAverage, corresponds to a strong low-pass filtering.
The local standard deviation, EasyImage::LocalDeviation, enhances the regions with
high-frequency content, such as noisy or textured areas.
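The window-size-independent computing time noted above is typically obtained with integral images: after one pass to build cumulative tables of the pixel values and their squares, any window's mean and deviation cost a constant number of operations. The sketch below illustrates that technique under that assumption; it is not the EasyImage implementation, and the class and method names are hypothetical.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

struct LocalStats { double mean, deviation; };

class IntegralStats {
public:
    // One pass builds integral images of values and squared values.
    IntegralStats(const std::vector<uint8_t>& src, int width, int height)
        : w_(width + 1), sum_((width + 1) * (height + 1), 0.0),
          sq_((width + 1) * (height + 1), 0.0) {
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x) {
                double v = src[y * width + x];
                sum_[(y + 1) * w_ + x + 1] = v + sum_[y * w_ + x + 1]
                    + sum_[(y + 1) * w_ + x] - sum_[y * w_ + x];
                sq_[(y + 1) * w_ + x + 1] = v * v + sq_[y * w_ + x + 1]
                    + sq_[(y + 1) * w_ + x] - sq_[y * w_ + x];
            }
    }
    // Mean and standard deviation of the window [x0, x1) x [y0, y1),
    // in O(1) regardless of the window size.
    LocalStats window(int x0, int y0, int x1, int y1) const {
        double n = (double)(x1 - x0) * (y1 - y0);
        double s = box(sum_, x0, y0, x1, y1), q = box(sq_, x0, y0, x1, y1);
        double mean = s / n;
        return {mean, std::sqrt(q / n - mean * mean)};
    }
private:
    double box(const std::vector<double>& t, int x0, int y0,
               int x1, int y1) const {
        return t[y1 * w_ + x1] - t[y0 * w_ + x1] - t[y1 * w_ + x0]
             + t[y0 * w_ + x0];
    }
    int w_;
    std::vector<double> sum_, sq_;
};
```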
Histogram Computation
Histogram Analysis
EasyImage::AnalyseHistogram and
EasyImage::AnalyseHistogramBW16 provide statistics and thresholding values:
n Total number of pixels.
n Smallest and largest pixel value (gray-level range).
n Average and standard deviation of the pixel values.
n Value and frequency of the most frequent pixel.
n Value and frequency of the least frequent pixel.
Histogram equalization
EasyImage::Equalize re-maps the gray levels so that the histogram fills the whole
dynamic range as uniformly as possible.
This may be useful to maximize the image contrast, or to reveal details in dark areas.
EasyImage::SetupEqualize creates a LUT so you can work explicitly with the histogram and
LUT vectors. It can be more efficient to keep the image histogram for other purposes (e.g.
statistics) and to reuse the equalization LUT on other images.
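The equalization LUT can be sketched from the cumulative histogram. This is a sketch of the standard technique, not the SetupEqualize code; the scaling convention (255 * cumulative / total) is an assumption, and the helper name is hypothetical.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Build a 256-entry remapping LUT: the cumulative histogram, scaled to
// [0..255], spreads frequent gray levels over a wider output range.
std::vector<uint8_t> equalizeLut(const std::vector<uint8_t>& src) {
    std::vector<uint32_t> histo(256, 0);
    for (uint8_t v : src) ++histo[v];
    std::vector<uint8_t> lut(256, 0);
    uint32_t cumul = 0, total = (uint32_t)src.size();
    for (int v = 0; v < 256; ++v) {
        cumul += histo[v];
        lut[v] = (uint8_t)((255ULL * cumul) / total);
    }
    return lut;
}
```

Applying `lut` pixel by pixel to the source image yields the equalized image; keeping the LUT around lets you apply the same remapping to other images, as noted above.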
Image focus
Sharp focusing is achieved when the EasyImage::Focusing quantity is maximum for a given
image. This function must be called multiple times on images taken with different focus
settings, as the basis of an "auto-focus" system.
EasyImage::Focusing computes the total gradient energy of the image. You can then use this
gradient as a measure of the focusing of an image.
The gradients of the image show the edges of the structures present in the image, with strong
values if the image is well-focused and weaker values otherwise.
To compute the total gradient energy of the image, Open eVision:
a. Squares the pixel values of the horizontal and vertical gradient images.
b. Averages the squared pixel values over both images.
c. Sums the averages.
d. Takes the square root of the resulting value.
The resulting value is maximum if the image is well-focused.
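Steps a-d can be sketched as follows. Central differences are used for the gradients here as an assumption (this section does not document the library's exact kernels), and the function name is hypothetical.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Total gradient energy: square the horizontal and vertical gradient
// values, average each squared image, sum the two averages and take the
// square root. The value is maximal for a well-focused image.
double focusing(const std::vector<uint8_t>& src, int width, int height) {
    double sumH = 0.0, sumV = 0.0;
    int n = 0;
    for (int y = 1; y < height - 1; ++y)
        for (int x = 1; x < width - 1; ++x) {
            double gx = (src[y * width + x + 1] - src[y * width + x - 1]) / 2.0;
            double gy = (src[(y + 1) * width + x] - src[(y - 1) * width + x]) / 2.0;
            sumH += gx * gx;    // step a: squared horizontal gradient
            sumV += gy * gy;    // step a: squared vertical gradient
            ++n;
        }
    return std::sqrt(sumH / n + sumV / n);   // steps b, c, d
}
```

An auto-focus loop would call this on each candidate frame and keep the focus setting that maximizes the returned value.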
A badly focused image, with its (absolute-valued) horizontal and vertical gradients.
The gradients show the edges of the structures with weak values. The total gradient energy
for this image is 7.9.
3. In the dialog box, enter a Variable name for the variable that is automatically created and
that will contain the result of the processing.
4. Click OK.
The selected tool dialog box opens.
□ Click on the Open an Image button and select one or several (using SHIFT and CTRL)
images on your computer.
□ Or select one of the images (or one of the ROIs, if any) already open in the drop-down list.
Note: You can select only images with an appropriate file format (JPG, PNG, TIFF or BMP) and in
8- and/or 24-bit depending on the library.
2. If you selected several images, activate one with the Load Previous or Load Next
buttons.
The tool is automatically applied on any loaded image and, at this stage, the result is displayed
based on the tool default settings.
The next step is "Step 3: Managing ROIs" below.
Creating a ROI
6. Drag the ROI corner and side handles to move it to the required position.
7. Click on the Close button to close the ROI Management window.
The next step is "Step 4: Configuring the Tool" on the next page.
Managing ROIs
If the Draw Rois box is checked, all ROIs are displayed on the image with a different color.
The execution time is the actual time that the processing took, as measured on your computer. It
depends on your computer's processor, memory, operating system... and, of course, on the
processor load at the time of execution. Thus this execution time varies slightly from execution
to execution.
5. To get a more representative execution time, click on the Read, Detect, Results or Execute
button several times and calculate the mean execution time.
6. If your application requires that you reduce the execution time, try:
□ To change the tool parameters,
□ To add one or several ROIs on your image,
□ To enhance your image.
The next step is "Step 6: Using the Generated Code" on the next page.
Once you are satisfied with the tool results, you can save or copy the generated code to use it
in your own application.
Of course, the best situation is to set up your image acquisition system to produce good,
easy-to-process images, so that the Open eVision tools run smoothly and efficiently.
If this is not possible or easy to achieve, you can pre-process your images or your ROIs to
enhance and prepare them for the Open eVision tool you want to run.
Using the various available functions, you can adjust the gain and offset of your image, apply a
convolution, threshold, scale, rotate and white balance your image, enhance contours... using
EasyImage and EasyColor functions.
Pre-processing images
The difference between pre-processing an image and running tools is that the pre-processing
generates a new image while the tools mainly extract and retrieve information from the image
without changing it.
To pre-process an image or an ROI:
1. In the main menu bar, click on the library you want to use (EasyImage or EasyColor).
2. Click on the function you want to use.
Most function dialog boxes are similar to the one illustrated below with 2 image selection areas
and a parameter setting area.
3. If there are multiple versions for your selected function, open the corresponding tab.
4. In the Source Image area, open the source image (as described in "Step 2: Opening an
Image" on page 82).
5. In the Destination Image area, open or create a new destination image.
6. Set your parameters.
7. Click on the Execute button.
The pre-processed image is available in the destination image as illustrated below.
8. If you want to use the destination image outside of Open eVision Studio, save it as described
below.
Saving an image
5. Tutorials
5.1. EasyImage
Converting a Gray-Level Image into a Binary Image
"Thresholding" on page 112
"Single Thresholding" on page 112 - "Double Thresholding" on page 112 - "Histogram-Based
Single Thresholding" on page 113 - "Histogram-Based Double Thresholding" on page 113
Objective
Following this tutorial, you will learn how to use EasyImage to convert a gray-level source
image into a binary destination image. Thresholding an image transforms all the gray pixels
into black or white pixels, depending on whether they are below or above a specified threshold.
Thresholding an image makes further analysis easier.
You'll need first to load an image (step 1). Then you'll set the thresholding parameters (step 2),
and perform the conversion (step 3).
1. In the right area of the Threshold dialog box, move the slider to change the threshold, and
preview the result directly in the source image.
2. Select the Minimum residue option to set a pre-defined algorithm that automatically finds
the right threshold.
1. Click the New icon in the Destination Image area to create a new destination image.
2. Keep the default settings for the new Image object, and click OK.
3. In the Threshold dialog box, click Execute to perform the thresholding in the destination
image.
Objective
Following this tutorial, you will learn how to use EasyImage to trace an object outline in a gray-
level image. The contour extraction allows you to get in a path vector all the points that
constitute an object contour, just by clicking an edge of this object.
You'll need first to load an image (step 1) and set a vector that will contain all the contour
points (step 2). Then you'll click an object edge, and the contour will be extracted automatically
(step 3).
Objective
Following this tutorial, you will learn how to use EasyImage to transform a gray-level image to
a binary image, keeping only the edges detected in the image. The conversion uses the Canny
edge detector algorithm.
You'll need to load a source image (step 1), and simply apply the Canny edge transformation
(step 2).
Source image (left) and destination image after Canny edge transformation (right)
1. From the main menu, click EasyImage, then Canny Edge Detector.
2. Keep the default variable name, and click OK.
3. Click the Open icon of the Source Image area, and load the image file
EasyImage\Key1.tif.
4. Keep the default variable name, and click OK.
1. Click the New icon in the Destination Image area to create a new destination image.
2. Keep the default settings, and click OK.
3. In the Canny Edge Detector dialog box, click Apply to perform the operation in the
destination image.
Objective
Following this tutorial, you will learn how to use EasyImage to detect the corners of an object.
The detection uses the Harris corner detector algorithm.
You'll need to load a source image (step 1), and simply apply the Harris corner detection (step
2).
1. From the main menu, click EasyImage, then Harris Corner Detector.
2. Keep the default variable name, and click OK.
3. Click the Open icon of the Source Image area, and load the image file
EasyGauge\Bracket1.tif.
4. Keep the default variable name, and click OK.
1. In the Harris corner detector dialog box, enter 2.3 for the Scale property.
2. Click Apply to perform corners detection.
3. Click Results to display the coordinates of all detected corners.
4. The Columns button allows you to display additional properties in the results list.
Objective
Following this tutorial, you will learn how to use EasyImage to detect defects
(horizontal/vertical line) in a gray-level image.
You'll need first to load a source image (step 1), set a vector (step 2), and then detect the defect
(horizontal line) (step 3).
1. In the Image Projection dialog box, select the column button, and click Execute to perform
the operation.
2. The resulting vector and the corresponding plot are displayed in the destination vector
window. The graphical result also appears on the image. Each vector value is the sum of all
pixel values across the corresponding horizontal row (or vertical column). In this way,
horizontal (or vertical) defects are easily detected.
Objective
Following this tutorial, you will learn how to create a flexible mask from a source image, to
restrict a future processing to an arbitrary-shaped do-care area.
Flexible masks can be created in many ways to build a bi-level image. Here, we will first load the
source image (step 1), and then successively invert it and threshold it (steps 2-3). The resulting
image (the flexible mask) will be saved as a new image file (step 4). This new image file is a
bi-level image. However, there are still black areas that need to be erased before using the
image as a flexible mask. You can use a third-party software, such as Paint, to clear the
unwanted areas.
1. From the main menu, click EasyImage, then Arithmetic & logic.
2. Click the Open icon of the Source Image 0 area, and load the image file
EasyImage\Key1.tif.
3. Keep the default variable name for the new image object, and click OK.
Objective
Following this tutorial, you will learn how to compute gray-level statistics on an arbitrary-
shaped area only.
You'll first need to load a source image (step 1) and a flexible mask image (step 2). The mask
image must be applied to the source image (step 3), to separate the do-care areas (which must
be considered) from the don't-care areas (which should not be considered). Finally, the gray-
level statistics are computed on the do-care area only (step 4).
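Restricting statistics to the do-care area amounts to skipping every pixel whose mask value is zero. A minimal sketch on plain arrays (the function name is illustrative):

```cpp
#include <vector>
#include <cstddef>

// Mean gray level computed over the do-care pixels only,
// i.e. the pixels where the mask is nonzero.
double maskedMean(const std::vector<unsigned char>& img,
                  const std::vector<unsigned char>& mask)
{
    long long sum = 0;
    long long count = 0;
    for (std::size_t i = 0; i < img.size(); i++)
    {
        if (mask[i] != 0) // don't-care pixels are simply skipped
        {
            sum += img[i];
            count++;
        }
    }
    return count ? static_cast<double>(sum) / count : 0.0;
}
```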
1. From the main menu, click EasyImage , then Image Statistics, Gray Scale .
2. Click the Open icon of the Source Image area, and load the image file EasyImage\Key1.tif.
3. Keep the default variable name for the new image object, and click OK.
3. Keep the default variable name for the new image, and click OK.
1. In the Mask area of the Gray Scale Image Pixels Statistics dialog box, select the mask image
from the drop-down list.
2. The source image preview in the dialog box shows (in red diagonal lines) the don't-care area,
that is, the area that will not be considered when computing the gray-level statistics.
Objective
Following this tutorial, you will learn how to use EasyImage to detect top corners in an image,
using the hit-and-miss transform.
You'll need to load a source image (step 1), set the kernel that represents a top corner (step 2),
and then set a destination image and simply execute the hit-and-miss transform (step 3).
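The transform itself tests, at each pixel, whether the neighborhood matches a pattern of mandatory foreground, mandatory background, and don't-care positions. The sketch below works on a plain binary array; the kernel shown is an illustrative top-corner pattern, not necessarily the one set in the dialog box:

```cpp
#include <vector>

// Returns true if the 3x3 neighbourhood of (x, y) matches the kernel:
// 1 = must be foreground, 0 = must be background, -1 = don't care.
bool hitAndMissAt(const std::vector<std::vector<int>>& bin,
                  const int kernel[3][3], int x, int y)
{
    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++)
        {
            int k = kernel[dy + 1][dx + 1];
            if (k == -1) continue;                    // don't care
            if (bin[y + dy][x + dx] != k) return false;
        }
    return true;
}

// Tiny demo: the top corner of a small triangle is matched at (1, 1).
bool demoTopCorner()
{
    std::vector<std::vector<int>> bin = {
        {0, 0, 0},
        {0, 1, 0},
        {1, 1, 1}};
    const int kernel[3][3] = {
        { 0, 0,  0},   // the row above must be background
        {-1, 1, -1},   // the centre must be foreground
        {-1, 1, -1}};  // the pixel below must be foreground
    return hitAndMissAt(bin, kernel, 1, 1);
}
```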
Source image (left) and top corner detected in the source image (white dot)
1. From the main menu, click EasyImage , then Hit And Miss.
2. Click the Open icon of the Source Image area, and load the image file
EasyImage\Diamond.bmp.
3. Keep the default variable name, and click OK.
● In the Hit And Miss dialog box, set the kernel according to the following values:
Objective
Following this tutorial, you will learn how to use EasyImage to detect scratches.
You'll first need to load an image (step 1), then set a destination vector and detect the scratches
(step 2).
Objective
Following this tutorial, you will learn how to use EasyImage to enhance an X-ray image.
You'll first need to load an image (step 1), then define convolution parameters to enhance the
image (step 2).
Source image (left) and enhanced image, after predefined and user-defined convolutions
(right)
1. From the Predefined kernels drop-down list, select Highpass2 , and click Execute to perform
the operation.
The image is no longer blurred, but the result is poor because the filter has amplified the noise
in the source image. We need to create a new convolution kernel that applies a softer high-pass
filter.
2. Click the New icon next to the User defined kernels drop-down list.
3. Keep the default dimension (3x3) and variable name, and click OK.
4. Enter the following kernel data from left to right and top to bottom: -1, -1, -1; -1, 15, -1;
-1, -1, -1, and click Apply.
5. Click Execute in the Convolution dialog box to perform the operation. The image is much
clearer now.
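To see why this kernel is a softer high-pass filter, note that its coefficients sum to 7 (15 - 8), so after normalization a flat area keeps its gray level while local contrast is amplified. A sketch on a plain array (assuming normalization by the kernel sum; names are illustrative):

```cpp
#include <vector>

// Apply a 3x3 kernel at the interior pixel (x, y), normalising by the
// kernel sum so that flat areas keep their grey level.
int convolveAt(const std::vector<std::vector<int>>& img,
               const int k[3][3], int kernelSum, int x, int y)
{
    int acc = 0;
    for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++)
            acc += k[dy + 1][dx + 1] * img[y + dy][x + dx];
    return acc / kernelSum;
}

// On a perfectly flat 100-gray area, the softened high-pass kernel
// returns the original level: (15*100 - 8*100) / 7 = 100.
int demoSoftHighpass()
{
    const int k[3][3] = {{-1, -1, -1}, {-1, 15, -1}, {-1, -1, -1}};
    std::vector<std::vector<int>> flat(3, std::vector<int>(3, 100));
    return convolveAt(flat, k, 7, 1, 1);
}
```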
Objective
Following this tutorial, you will learn how to use EasyImage to correct non-uniform illumination
in an image.
You'll first need to load an image (step 1), load a light reference image (step 2), and perform the
correction (step 3).
Source image, with non-uniform illumination (left) and corrected image (right)
1. Click the Open icon of the Light Reference area, and load the image file
EasyImage\Board (light reference).tif.
To obtain the light reference image, we used a white screen illuminated under the same
conditions as the board (original image).
2. Keep the default variable name for the new image object, and click OK.
1. Click the New icon in the Destination Image area to create a new destination image.
2. Keep the default values and click OK.
3. Click Execute to perform the operation.
4. In both source and destination images, right-click and select 3D Rendering .
5. In the new 3D windows, click and drag the mouse to rotate the view. Compare the profiles.
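The correction itself divides each source pixel by the corresponding light-reference pixel and rescales to a target level, so that weakly lit areas are brightened back to a uniform level. A per-pixel sketch (the target level and clamping convention are illustrative, not the exact EasyImage formula):

```cpp
// Flat-field correction of one pixel: divide by the light reference
// and rescale to a target grey level, clamping to the 8-bit range.
unsigned char correctPixel(unsigned char src, unsigned char lightRef,
                           unsigned char targetLevel)
{
    if (lightRef == 0)
        return 0;                         // avoid division by zero
    int v = src * targetLevel / lightRef; // brighten weakly lit areas
    return static_cast<unsigned char>(v > 255 ? 255 : v);
}
```

For example, a pixel reading 50 under a lamp that only delivered half the reference brightness (lightRef 100 against a target of 200) is restored to 100.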
Objective
Following this tutorial, you will learn how to use EasyImage to correct a shear effect in an
image. The following image was taken by a line-scan camera whose sensor was misaligned,
resulting in a so-called shear effect.
You'll first need to load an image (step 1), create a destination image (step 2), and then set
pivot parameters to perform the correction (step 3).
Source image, with a shear effect (left) and corrected image (right)
1. In the source image, using the mouse, drag each pivot to the center of the fiducial marks (the
dots around the U18 area).
Notice that the source pivot coordinates, in the Register dialog box, have changed accordingly.
2. To correct the image, enter the following destination pivot coordinates:
□ X0: 170
□ Y0: 495
□ X1: 470
□ Y1: 495
□ X2: 170
□ Y2: 144
3. Click Execute to perform the operation.
Objective
Following this tutorial, you will learn how to use EasyImage to correct a skew effect in an image.
You'll first need to load an image (step 1), create a destination image (step 2), and then set a
correction angle to perform the correction (step 3).
Source image, with a skew effect (left) and corrected image (right)
1. From the main menu, click EasyImage , then Scale and Rotate .
2. Click the Open icon of the Source Image area, and load the image file
EasyImage\CCD.tif.
3. Keep the default variable name for the new image object, and click OK.
1. Select the Rotate option, and enter -16.17 in the Angle (Deg) field. (To measure this rotation
angle, refer to Measuring the rotation angle of an object.)
2. From the Interpolation bits drop-down list, select 8 bits to get a better result.
3. Click Execute to perform the operation.
5.2. EasyColor
Performing Thresholding on Color Images
"Color Components" on page 122
Objective
Following this tutorial, you will learn how to use EasyColor to segment a color source image, by
setting a threshold value for each color component of the current color system. For example, to
retrieve the solder pads on a PCB, you'll perform a color segmentation based on the golden
pixels (H), with a loose discrimination on the brightness (L) and saturation (S), to eliminate
surface and lighting effects.
You'll first need to load a color source image, create a destination image, and create a color
lookup table (steps 1-3). Then, you'll set the color system and tune each component's tolerance
to get a satisfactory segmentation of the solder pads (step 4).
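Hue-based segmentation like this amounts to converting each pixel from RGB to a hue angle and keeping the pixels whose hue falls within a tolerance window around the target (golden) hue. A sketch of the underlying math, with channel values in [0, 1] and illustrative function names:

```cpp
#include <cmath>
#include <algorithm>

// Hue in degrees [0, 360) of an RGB pixel (standard HSL hue formula).
double hueOf(double r, double g, double b)
{
    double mx = std::max(r, std::max(g, b));
    double mn = std::min(r, std::min(g, b));
    double d = mx - mn;
    if (d == 0.0) return 0.0;             // achromatic pixel
    double h;
    if (mx == r)      h = std::fmod((g - b) / d, 6.0);
    else if (mx == g) h = (b - r) / d + 2.0;
    else              h = (r - g) / d + 4.0;
    h *= 60.0;
    return h < 0.0 ? h + 360.0 : h;
}

// Keep a pixel if its hue lies within +/- tol degrees of the target.
bool keepPixel(double r, double g, double b, double targetHue, double tol)
{
    double diff = std::fabs(hueOf(r, g, b) - targetHue);
    if (diff > 180.0) diff = 360.0 - diff; // hue wraps around at 360
    return diff <= tol;
}
```

A golden pixel such as (1.0, 0.84, 0.0) has a hue near 50 degrees and is kept; a blue pixel is rejected.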
Source image
Thresholded image
Objective
Following this tutorial, you will learn how to use EasyImage to perform color segmentation.
You'll first need to load an image (step 1), create a color lookup table (step 2), and perform the
segmentation (step 3).
Source image
Segmented image
6. Code Snippets
// Images constructor
EImageBW8 srcImage;
EImageBW8 dstImage;
// ...
// Images constructor
EImageBW8 srcImage;
// ...
EImageBW8 img;
OEV_UINT8* pixelPtr;
OEV_UINT8* rowPtr;
OEV_UINT8 pixelValue;
OEV_UINT32 rowPitch;
OEV_UINT32 x, y;
// Retrieve the pointer to the first row of pixels and the row pitch
rowPtr = reinterpret_cast<OEV_UINT8*>(img.GetImagePtr());
rowPitch = img.GetRowPitch();
// Scan every row, and every pixel of each row
for (y = 0; y < (OEV_UINT32)img.GetHeight(); y++)
{
pixelPtr = rowPtr;
for (x = 0; x < (OEV_UINT32)img.GetWidth(); x++)
{
*pixelPtr = pixelValue;
pixelPtr++;
}
rowPtr += rowPitch;
}
ROI Placement
///////////////////////////////////////////////////////////////
// This code snippet shows how to attach an ROI to an image //
// and set its placement. //
///////////////////////////////////////////////////////////////
// Image constructor
EImageBW8 parentImage;
// ROI constructor
EROIBW8 myROI;
// ...
Vector Management
///////////////////////////////////////////////////////////////
// This code snippet shows how to create a vector, fill it //
// and retrieve the value of a given element. //
///////////////////////////////////////////////////////////////
// EBW8Vector constructor
EBW8Vector ramp;
Exception Management
////////////////////////////////////////////
// This code snippet shows how to manage //
// Open eVision exceptions. //
////////////////////////////////////////////
try
{
// Image constructor
EImageC24 srcImage;
// ...
}
catch (Euresys::Open_eVision_1_1::EException exc)
{
// Retrieve the exception description
std::string error = exc.What();
}
6.2. EasyImage
Thresholding
Single Thresholding
////////////////////////////////////////////////////////////////
// This code snippet shows how to perform minimum residue //
// thresholding, absolute thresholding and relative //
// thresholding operations. //
////////////////////////////////////////////////////////////////
// Images constructor
EImageBW8 srcImage;
EImageBW8 dstImage;
// ...
Double Thresholding
////////////////////////////////////////////////////////////////
// This code snippet shows how to perform a thresholding //
// operation based on low and high threshold values. //
////////////////////////////////////////////////////////////////
// Images constructor
EImageBW8 srcImage;
EImageBW8 dstImage;
// ...
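Conceptually, double thresholding classifies each pixel into one of three levels relative to the low and high thresholds. One common convention is sketched below (the output levels are illustrative, not necessarily those used by EasyImage):

```cpp
// Three-level classification of an 8-bit pixel: below the low
// threshold, inside the band, or above the high threshold.
unsigned char doubleThreshold(unsigned char p, unsigned char low,
                              unsigned char high)
{
    if (p < low)  return 0;    // background
    if (p > high) return 255;  // bright / saturated
    return 128;                // inside the band of interest
}
```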
// Images constructor
EImageBW8 srcImage;
EImageBW8 dstImage;
// Histogram constructor
EBWHistogramVector histo;
// Variables
unsigned int thresholdValue;
float avgBelowThr, avgAboveThr;
// ...
// Compute the single threshold (and the average pixel values below and above the threshold)
thresholdValue = EThresholdMode_MinResidue;
EasyImage::HistogramThreshold(&histo, thresholdValue, avgBelowThr, avgAboveThr);
// Images constructor
EImageBW8 srcImage;
EImageBW8 dstImage;
// Histogram constructor
EBWHistogramVector histo;
// Variables
EBW8 lowThr;
EBW8 highThr;
float avgBelowThr, avgBetweenThr, avgAboveThr;
// ...
// Images constructor
EImageBW8 srcGray0, srcGray1, dstGray;
EImageC24 srcColor, dstColor;
// ...
// Erase (blacken) the destination image where the source image is black
EasyImage::Oper(EArithmeticLogicOperation_SetZero, &srcGray0, (EBW8)0, &dstGray);
Convolution
Pre-Defined Kernel Filtering
///////////////////////////////////////////////////////////
// This code snippet shows how to apply miscellaneous //
// convolution operations based on pre-defined kernels. //
///////////////////////////////////////////////////////////
// Images constructor
EImageBW8 srcImage;
EImageBW8 dstImage;
// ...
// Images constructor
EImageBW8 srcImage;
EImageBW8 dstImage;
// ...
Non-Linear Filtering
Morphological Filtering
/////////////////////////////////////////////////////////
// This code snippet shows how to apply miscellaneous //
// morphological filtering operations. //
/////////////////////////////////////////////////////////
// Images constructor
EImageBW8 srcImage;
EImageBW8 dstImage;
// ...
Hit-and-Miss Transform
//////////////////////////////////////////////////////////////
// This code snippet shows how to highlight the left corner //
// of a rhombus by means of a Hit-and-Miss operation. //
//////////////////////////////////////////////////////////////
// Images constructor
EImageBW8 srcImage;
EImageBW8 dstImage;
// Kernel constructor
EHitAndMissKernel leftCorner;
// ...
// The left corner of a rhombus is an object pixel with background
// immediately to its left
leftCorner.SetValue(-1, 0, EHitAndMissValue_Background);
Vector Operations
Path Sampling
//////////////////////////////////////////////////////////////
// This code snippet shows how to retrieve and store the //
// pixel values along a given path together with the //
// corresponding pixel coordinates. //
//////////////////////////////////////////////////////////////
// Image constructor
EImageBW8 srcImage;
// ...
// Vector constructor
EBW8PathVector path;
// Path definition
path.Empty();
for (int i = 0; i < 100; i++)
{
EBW8Path p;
p.X = i;
p.Y = i;
p.Pixel = 128;
path.AddElement(p);
}
Profile Sampling
//////////////////////////////////////////////////////////////
// This code snippet shows how to set, retrieve and store //
// the pixel values along a given line segment. //
//////////////////////////////////////////////////////////////
// Image constructor
EImageBW8 srcImage;
// ...
// Vector constructor
EBW8Vector profile;
Statistics
Image Statistics
////////////////////////////////////////////////////////////////////
// This code snippet shows how to compute basic image statistics. //
////////////////////////////////////////////////////////////////////
// Image constructor
EImageBW8 srcImage;
// ...
// Images constructor
EImageBW8 srcImage;
EImageBW8 dstImage0, dstImage1;
// ...
Histogram-Based Statistics
/////////////////////////////////////////////////////////
// This code snippet shows how to compute statistics //
// based on a histogram.  //
/////////////////////////////////////////////////////////
// Image constructor
EImageBW8 srcImage;
// ...
// Histogram constructor
EBWHistogramVector histo;
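Once the histogram is available, statistics such as the mean gray level can be computed from the bin counts alone, without revisiting the pixels. A sketch on plain arrays (illustrative names):

```cpp
#include <vector>
#include <cstddef>

// Mean grey level computed from a 256-bin histogram: each bin g holds
// the number of pixels whose grey level is g.
double histogramMean(const std::vector<long long>& histo)
{
    long long count = 0, sum = 0;
    for (std::size_t g = 0; g < histo.size(); g++)
    {
        count += histo[g];
        sum += static_cast<long long>(g) * histo[g];
    }
    return count ? static_cast<double>(sum) / count : 0.0;
}

// Tiny demo: two pixels at grey level 10 and two at 20 average to 15.
double demoHistogramMean()
{
    std::vector<long long> h(256, 0);
    h[10] = 2;
    h[20] = 2;
    return histogramMean(h);
}
```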
///////////////////////////////////////////////////
// This code snippet shows how to perform noise //
// reduction by temporal averaging. //
///////////////////////////////////////////////////
// Images constructor
EImageBW16 noisyImage, cleanImage;
// ...
// Accumulation loop
int n;
for (n = 0; n < 10; n++)
{
// Acquire a new image into noisyImage
// ...
}
Recursive Average
///////////////////////////////////////////////////
// This code snippet shows how to perform noise //
// reduction by recursive averaging. //
///////////////////////////////////////////////////
// Images constructor
EImageBW8 noisyImage, cleanImage;
// ...
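Recursive averaging blends each new frame into the running result, so noise shrinks over time without storing past frames. The weighting factor below is illustrative:

```cpp
// One step of exponential (recursive) averaging: blend the new noisy
// frame into the running clean estimate with weight alpha.
double recursiveAverage(double clean, double noisy, double alpha)
{
    return alpha * noisy + (1.0 - alpha) * clean;
}

// With a constant scene at grey level 100, the running average
// converges towards 100 after enough frames.
double demoConverges()
{
    double clean = 0.0;
    for (int n = 0; n < 200; n++)
        clean = recursiveAverage(clean, 100.0, 0.25);
    return clean;
}
```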
//////////////////////////////////////////////////////////////////
// This code snippet shows how to retrieve corners' coordinates //
// by means of the Harris corner detector algorithm. //
//////////////////////////////////////////////////////////////////
// Image constructor
EImageBW8 srcImage;
// ...
/////////////////////////////////////////////////////
// This code snippet shows how to highlight edges //
// by means of the Canny edge detector algorithm. //
/////////////////////////////////////////////////////
// Images constructor
EImageBW8 srcImage;
EImageBW8 dstImage;
// ...
/////////////////////////////////////////////////////////
// This code snippet shows how to compute statistics //
// inside a region defined by a flexible mask. //
/////////////////////////////////////////////////////////
// Images constructor
EImageBW8 srcImage;
EImageBW8 mask;
// ...
6.3. EasyColor
Colorimetric Systems Conversion
//////////////////////////////////////////////////////////
// This code snippet shows how to convert a color image //
// from the RGB to the Lab colorimetric system. //
//////////////////////////////////////////////////////////
// Images constructor
EImageC24 srcImage;
EImageC24 dstImage;
// ...
Color Components
//////////////////////////////////////////////////////////
// This code snippet shows how to create a color image //
// from 3 grayscale images and extract the luminance //
// component from a color image. //
//////////////////////////////////////////////////////////
// Images constructor
EImageBW8 red, green, blue;
EImageC24 colorImage;
EImageBW8 luminance;
// ...
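Extracting luminance amounts to a weighted sum of the three color planes; the ITU-R BT.601 weights shown below are the usual choice for 8-bit video, though the exact weights used by EasyColor may differ:

```cpp
// BT.601 luminance of one RGB pixel: Y = 0.299 R + 0.587 G + 0.114 B,
// rounded to the nearest 8-bit value.
unsigned char luminanceOf(unsigned char r, unsigned char g, unsigned char b)
{
    return static_cast<unsigned char>(0.299 * r + 0.587 * g + 0.114 * b + 0.5);
}
```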
White Balance
/////////////////////////////////////////////////////////////
// This code snippet shows how to perform white balancing. //
/////////////////////////////////////////////////////////////
// Images constructor
EImageC24 srcImage, dstImage;
EImageC24 whiteRef;
// ...
Pseudo-Coloring
/////////////////////////////////////////////////////////////
// This code snippet shows how to perform pseudo-coloring. //
/////////////////////////////////////////////////////////////
// Images constructor
EImageBW8 srcImage;
EImageC24 dstImage;
// ...
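Pseudo-coloring maps each gray level to an RGB color through a lookup table. The three-band table below is purely illustrative (dark to blue, mid-tones to green, bright to red):

```cpp
struct RGB { unsigned char r, g, b; };

// Map a grey level to an RGB colour through a tiny three-band LUT.
RGB pseudoColor(unsigned char gray)
{
    if (gray < 85)  return RGB{0, 0, 255};   // dark  -> blue
    if (gray < 170) return RGB{0, 255, 0};   // mid   -> green
    return RGB{255, 0, 0};                   // bright -> red
}
```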
// Images constructor
EImageBW8 bayerImage;
EImageC24 dstImage;
// ...