The Pocket Handbook
of Image Processing Algorithms
in C
Harley R. Myler
Arthur R. Weeks
Department
of Electrical & Computer Engineering
University of Central Florida
Orlando, Florida
Prentice Hall PTR
Englewood Cliffs, New Jersey 07632

Library of Congress Cataloging-in-Publication Data
Myler, Harley R.
   The pocket handbook of image processing algorithms in C /
   Harley R. Myler, Arthur R. Weeks.
GIF (Graphics Interchange Format) is copyrighted/trademarked by CompuServe,
Inc. TIFF (Tagged Interchange File Format) was developed by Microsoft
Corporation and Aldus Corporation. PCX is a trademark of Z-Soft Corporation.
Microsoft and MS-DOS are trademarks of Microsoft Corporation. MacPaint and
Macintosh are trademarks of Apple Computer.
All rights reserved. No part of this book may be reproduced, in any form or by
any means, without permission in writing from the publisher.
Printed in the United States of America
ISBN 0-13-642240-3
Prentice-Hall International (UK) Limited, London
Prentice-Hall of Australia Pty. Limited, Sydney
Prentice-Hall Canada Inc., Toronto
Prentice-Hall Hispanoamericana, S.A., Mexico
Prentice-Hall of India Private Limited, New Delhi
Prentice-Hall of Japan, Inc., Tokyo
Simon & Schuster Asia Pte. Ltd., Singapore
Editora Prentice-Hall do Brasil, Ltda., Rio de Janeiro

To Nancy, Krifka and Logan
To Dian, Anne, Frank, Anne and Richard

Table of Contents
Preface  xi
How to Use This Book  1
Topical Layout  1
Finding Topics  2
Algorithm Programming  2
Class Groupings Index  7
Adaptive DW-MTM Filter  13
• Adaptive Filters  16
Adaptive MMSE Filter  17
Alpha-Trimmed Mean Filter  20
Area  23
Arithmetic Mean Filter  25
Brightness Correction  27
C.I.E. Chromaticity Diagram  29
Centroid  30
Chain Code  32
Circularity  35
Circularly Symmetric Filter  37
Closing (Binary) Filter  40
Closing (Graylevel) Filter  42
Clustering  44
• Coding and Compression  47
• Color Image Processing  48
Color Saturation Correction  49
Color Tint Correction  51
Compactness  53
Contra-Harmonic Mean Filter  54
Contrast Correction  57
Dilation (Binary) Filter  59
Dilation (Graylevel) Filter  61
Discrete Convolution  63
Discrete Correlation  65
Discrete Cosine Transform  67
Discrete Fourier Transform  70
Dithering  74
Erosion (Binary) Filter  76
Erosion (Graylevel) Filter  78
Flip  80
Fourier Transform Properties  81
Gamma Noise  82
Gaussian Filters  84
Gaussian Noise  85
Geometric Mean Filter  87
Gradient Masks  89
Graphic Interchange Format (GIF)  90
• Graphics Algorithms  93
Graylevel  94
Graylevel Histogram  95
HSI Color Model  97
Hadamard Transform  99
Harmonic Mean Filter  102
Hartley Transform  105
High Pass Spatial Filters  109
Histogram Equalization  110
Histogram Specification  112
• Histogram Techniques  115
Hit-Miss (Binary) Filter  116
Homomorphic Filter  119
Hough Transform  122
Huffman Coding  125
• Image Fundamentals  129
Inverse Filter  130
Joint Photographic Experts Group (JPEG)  133
Laplacian Filter  135
Least Mean Squares Filter  136
Line Detector  137
Low Pass Spatial Filters  140
MacPaint File Format  141
Mask  144
Maximum Axis  145
Maximum Filter  147
Median Filter  149
• Mensuration  151
Midpoint Filter  152
Minimum Axis  154
Minimum Filter  155
Moments  157
Morphing  160
• Morphological Filters  162
Multi-Graylevel Thresholding  163
Negative Exponential Noise  165
• Noise  167
• Nonlinear Filters  168
Nonlinear Transformations  169
Opening (Binary) Filter  171
Opening (Graylevel) Filter  173
Optimum Thresholding  175
Outline (Binary) Filter  178
PC Paintbrush (PCX)  180
Perimeter  183
Pixel  184
Point Detector  185
Pseudocolor  187
Pseudocolor Display  190
Quantization  191
RGB Color Model  192
Range Filter  194
Rayleigh Noise  196
Robert's Filter  198
Rotate  199
Run Length Encoding (RLE)  200
Salt and Pepper Noise  202
Sampling  204
Scaling  205
• Segmentation
Skeleton (Binary) Filter  208
Slant Transform  212
Sobel Filter  218
• Spatial Filters  220
Spatial Frequency  221
• Spatial Frequency Filters  222
Spatial Masks  223
• Storage Formats  224
Tagged Interchange File Format (TIF)  225
Thickening (Binary) Filter  229
Thinning (Binary) Filter  234
Thresholding  239
Top-Hat Filter  241
• Transforms
True-Color Display  245
Uniform Noise  246
Walsh Transform  248
Warping  252
Weighted Mean Filter  254
Weighted Median Filter  257
Wiener Filter  260
Wiener Filter (Parametric)  261
YIQ Color Model  264
Yp Mean Filter  265
Zooming  268
Appendix A  Image.h File  270
Appendix B  Example Program  271
Appendix C  TIF Tags List  279
Bibliography  281
Glossary  282
Subject Index  297

Preface
This book is the culmination of an idea that was not ours
uniquely, but one that has probably occurred to anyone who
has spent long hours at the terminal writing Image
Processing programs and routines. The programmer is often
found surrounded by a library of texts, loaded with formulas
and mathematically described algorithms. It is then
necessary to engage in the arduous task of writing computer
code from the mathematical description. In addition, if one
does not have an adequate background in signal processing,
the difficulty of this task is further compounded.
Programmers often spend time flipping from book to book,
index to index, as they search for a useful mask or filter
technique. In this book we have tried to bring within a
single, desktop friendly cover--in a simple, easy to use
format—the most popular of the vast ensemble of image
processing algorithms. We chose the C programming
language as our algorithm description format because of its
popularity, portability and widespread use in image
processing systems. Also, the C language is compact and
easy to learn, and a plethora of texts and tutorials exist in the
literature for both the novice and seasoned programmer.
Appendix B contains the complete code for a simple image
processing program that may be used as a template for
implementing any of the code in this handbook.
It is our hope that all programmers (in the end, are we not all
programmers?) of Image Processing systems will find this
book useful--both as a reference guide and for the implicit
tutorial nature of our descriptions. Students should find our
simple and direct discussion effective as a supplement to
their classroom text or tutorial guides. Seasoned users will
find the format convenient to keep close to the terminal for
quick reference and easy access to definitions and
algorithms.
Harley R. Myler
Arthur R. Weeks

How to Use This Book
TOPICAL LAYOUT
This book is intended to serve as a ready reference and
imaging system user and developer companion. Algorithms
and definitions covered in this book are arranged
alphabetically and typically occupy one or two pages. Each
algorithm follows a similar format that is illustrated by the
miniaturized sample page graphic to the left, below.
[Figure: a miniaturized sample page, the Pixel topic, showing
the topic name and page number in the page header, followed
by the CLASS, DESCRIPTION, EXAMPLE, PROGRAM
EXAMPLE, and SEE ALSO sections.]
Each topic is identified by
name in bold print in the
header region of each page
with the page number. The
class of algorithms to
which the topic belongs is
identified next. There are
fifteen classes, which are:
• Adaptive Filters
• Coding and Compression
• Color
• Graphics
• Histogram Operations
• Image Fundamentals
• Mensuration
• Morphological Filters
• Noise
• Nonlinear Filters
• Segmentation
• Spatial Filters
• Spatial Frequency Filters
• Storage Formats
• Transforms
Each class is described on a separate page and appears in
alphabetic order with the topics. Classes are distinguished
from topics by a bullet (•) in front of the class name in the
page header.
A description of the algorithm or the definition of the topic
follows the class designation. This is followed by an
example of the algorithm or description discussed. Examples
may be graphic illustrations or images. The algorithm in C is
then given, if appropriate to the topic, followed by suggested
follow-on topics under the SEE ALSO heading.
FINDING TOPICS
Well-known topics, such as thresholding, may be looked up
rapidly by thumbing through the headers at the top of each
page. If the class of an algorithm is known, you may go
directly to the class description, which will list all the topics
that fall under that class.
The Table of Contents in the front of the book lists all of the
topics and class descriptions as they appear in alphabetical
order with their page numbers.
The Class Groupings Index, following this discussion, lists
the classes alphabetically and their topics along with page
numbers.
The Subject Index, at the end of the book, lists subject areas
and page numbers.
ALGORITHM PROGRAMMING
The C programming language algorithms given in this book
have been written in as simple a fashion as possible, with as
little dependence on sophisticated programming technique as
possible. In the words of Einstein, "make everything as
simple as possible, but not simpler." It is a relatively easy
matter for the experienced programmer to make the C
routines presented more efficient and elegant; however, it is
not a simple matter for an inexperienced programmer to read
and understand, i.e., simplify, a program that has been
written in a complex and obtuse way with the goal of
computational efficiency.
Throughout the book an image data structure has been used
that allows flexibility while maintaining clarity. Image
representation within a computer is best expressed as a
rectangular array; however, when dealing with more than
one pixel data type across a set of algorithms, a C data
structure is the best solution for maintaining consistency
throughout the algorithm ensemble. The data structure used
here contains the rectangular size of the image, the type of
data each pixel in the image represents, and a pointer to the
image data itself.
Another consideration in the use of a data structure are the
differences that exist between hardware platforms. Images,
by nature, are large data objects. The structure that we use
assumes that the user has allocated a data space large enough
to accommodate the image being processed within any
restrictive boundaries that may exist on the hardware being
used.
The example graphic below shows our data structure and its
relationship to an image.
struct Image {
   int Rows;
   int Cols;
   unsigned char *Data;
   unsigned char Type;
};

[Figure: the Image structure beside a Rows x Columns pixel
array; the Data member points to the pixel data.]
The *Data pointer is a contiguous sequence of bytes whose
type is described by the Type variable. The following four
types of image data pixels are used:

   Type = 0   Unsigned Character (BASIC)
   Type = 2   Integer (UINT)
   Type = 4   Float (REAL)
   Type = 8   Complex (CMPLX)
For example, a rectangular image with Rows = 64 and
Columns = 256 and a Type = 0 will require that 16,384 (256
x 64) bytes of storage be allocated for the *Data pointer. If
Type = 2 and if an integer on the host machine requires 4
bytes, then four times the previous amount will have to be
allocated.
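As a minimal sketch of what that allocation might look like (the helper names pixel_bytes and alloc_image_data and the use of malloc are illustrative assumptions, not the handbook's code; the handbook's own I/O routines appear in Appendix B):

#include <stdlib.h>

/* bytes per pixel for each Type code, using the Image
   structure defined above (assumed helper) */
size_t pixel_bytes(unsigned char Type)
{
   switch(Type){
      case 0:  return sizeof(unsigned char); /* BASIC */
      case 2:  return sizeof(int);           /* UINT  */
      case 4:  return sizeof(float);         /* REAL  */
      case 8:  return 2*sizeof(float);       /* CMPLX: real + imaginary */
      default: return 0;
   }
}

/* allocate the Data buffer for an already-sized image;
   returns 1 on success, 0 on failure (assumed helper) */
int alloc_image_data(struct Image *Img)
{
   Img->Data = (unsigned char *)
      malloc((size_t)Img->Rows * Img->Cols *
             pixel_bytes(Img->Type));
   return Img->Data != NULL;
}

For the 64 x 256 example above, a Type = 0 image allocates 16,384 bytes, while a Type = 2 image on a machine with 4-byte integers allocates four times as much.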
One must also apply special data type modifiers when
programming MS-DOS™ systems. The far and huge
keywords must precede large data space pointers. Proper use
of these modifiers is discussed further in Appendix B.
A complex image is a special case. Here, each datum
represents a real and imaginary component using a float
data type. If the host computer requires 4 bytes to represent a
float value, then each datum in the *Data buffer will consist
of 8 bytes: the first 4 bytes for the real part of the datum, and4 How to Use This Book
the second 4 bytes for the imaginary part.
Consider the diagram below as representing an image, where
each pixel is represented by a square and the total number of
pixels is unspecified. The pixels have been lettered to
facilitate the discussion. The first row of the image consists
of the set of pixels {A, B, ..., C}, the second row is {D, E, ...,
F}, and the last row is {G, H, ..., I}. We can easily generalize
this to sets of columns, as well.
Using our image data structure and assuming pixels of Type
0, the arrangement of data for this image in memory would
be as follows, where each pixel requires one byte of storage:
Data
   [A][B] ... [C][D][E] ... [F][G][H] ... [I]
Data will point to a series of bytes that represent the pixel
values in column-row order. If Type = 2 and an integer in
our machine requires two bytes of storage (often this is the
size of an int), then the internal representation will be:
Data
   [A][A][B][B] ... [C][C][D][D][E][E] ...
So pixel 'A' requires 2 bytes of storage, and so on. If the
image is complex (Type = 8) and a float in the machine
requires 4 bytes, the memory map will be:
Data
   [A][A][A][A] [A][A][A][A] [B][B][B][B] [B][B][B][B] ...
     real A      imaginary A    real B      imaginary B
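As an illustrative helper (the function name is an assumption; the two-float layout follows the memory map above), the real and imaginary parts of a complex pixel could be read as:

/* read the real and imaginary parts of the complex
   (Type = 8) pixel at row r, column c of Img */
void get_complex_pixel(struct Image *Img, int r, int c,
                       float *re, float *im)
{
   float *p = (float *)Img->Data +
              2L*((long)r*Img->Cols + c);
   *re = p[0];   /* first 4 bytes: real part     */
   *im = p[1];   /* next 4 bytes: imaginary part */
}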
A complication in programming arises in the accessing of the
data using cartesian coordinates. The data structure allows
for easy representation of a large number of pixel data
formats and conforms readily to the idea of memory as a
contiguous sequence of bytes; however, access of the data by
coordinate is tricky. We must convert from the cartesian
coordinate pair into a single value so that we may index into
the linear image data array.
Consider the diagram below. We can visualize the pixels of
the small 4 x 3 image to the left, with pixel values A
through L, in terms of their row-column coordinates. The
programming description could be specified as the array,
Img. Pixel G would then be Img[1][2]. Alternatively, we
can specify an image structure as described earlier, with the
C statement struct Image Img. To access pixel G using our
structure, we have to index into the data vector one row and
three columns. The data in either case is stored as a sequence
of bytes. The Data pointer of the image structure will be
initialized to point to the start of this sequence, as shown in
the graphic to the right, after the space has been allocated.
Using the structure format, the value of pixel G would be:

   *(Img.Data + (R * Img.Cols) + C)

Where R is the row number (indexed from zero) and C is the
column. For this example, *(Img.Data + 1*4 + 2) will yield
the value, G. There are other ways to formulate access
schemes into linear arrays that have two-dimensional data
mapped to them, and some of these are illustrated in the
various program sections of the book. A C program header
file listing for the data structure is given in Appendix A, and
a complete image processing program listing is described in
Appendix B.
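To make the index arithmetic concrete, here is a small illustrative helper (the function name and the test values are assumptions, not part of the handbook's listings):

/* fetch an unsigned character (Type = 0) pixel of Img
   by row-column coordinates */
unsigned char get_pixel(struct Image *Img, int R, int C)
{
   return *(Img->Data + (long)R*Img->Cols + C);
}

/* for the 4 x 3 example image above, pixel G lies at row 1,
   column 2, so get_pixel(&Img, 1, 2) evaluates
   *(Img.Data + 1*4 + 2) and returns the value G */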
The routines presented in this book assume different image
pixel data formats and the user must observe some caution
when incorporating the algorithms in their own programs.
The most popular pixel size is, of course, the unsigned
character. For this reason, it was chosen as the base pointer
data type in our image structure. Image transforms such as
the Fourier and filters that operate in the spatial frequency
domain use complex pixels. Other processes use float pixels.
We have indicated the type of pixel data required by the
algorithm in the description, but in the interest of
compactness and simplicity have put no error detection
mechanisms in the code. It is assumed that the user has
allocated the correct data space for the algorithm called.
To simplify this concept of variable data widths for image
pixels, we have provided an example of a multiple data
format image I/O routine at the end of Appendix B.
Finally, a Glossary has been included with a large range of
image processing terms defined as a convenience to the
reader.

Class Groupings Index
This index is for reference by class membership. The classes
and page numbers are:

• Adaptive Filters
• Coding and Compression
• Color Image Processing
• Graphics Algorithms
• Histogram Techniques
• Image Fundamentals
• Mensuration
• Morphological Filters
• Noise
• Nonlinear Filters
• Segmentation
• Spatial Filters
• Spatial Frequency Filters
• Storage Formats
• Transforms

• Adaptive Filters Class
Adaptive DW-MTM Filter  13
Adaptive MMSE Filter  17

• Coding and Compression Class
Chain Codes  32
Huffman Coding  125
Run Length Encoding  200
• Color Image Processing Class
C.I.E. Chromaticity Diagram  29
Color Saturation Correction  49
Color Tint Correction  51
HSI Color Model  97
Pseudocolor  187
Pseudocolor Display  190
RGB Color Model  192
True-Color Display  245
YIQ Color Model  264

• Graphics Algorithms Class
Dithering  74
Flip  80
Morphing  160
Rotate  199
Warping  252
Zooming  268

• Histogram Techniques Class
Brightness Correction  27
Contrast Correction  57
Graylevel Histogram  95
Histogram Equalization  110
Histogram Specification  112
Nonlinear Transformations  169
• Image Fundamentals Class
Discrete Convolution  63
Discrete Correlation  65
Graylevel  94
Mask  144
Pixel  184
Quantization  191
Sampling  204
Scaling  205
Spatial Frequency  221

• Mensuration Class
Area  23
Centroid  30
Circularity  35
Clustering  44
Compactness  53
Maximum Axis  145
Minimum Axis  154
Moments  157
Perimeter  183

• Morphological Filters Class
Closing (Binary) Filter  40
Closing (Graylevel) Filter  42
Dilation (Binary) Filter  59
Dilation (Graylevel) Filter  61
Erosion (Binary) Filter  76
Erosion (Graylevel) Filter  78
Hit-Miss (Binary) Filter  116
• Morphological Filters Class (continued)
Opening (Binary) Filter  171
Opening (Graylevel) Filter  173
Outline (Binary) Filter  178
Skeleton (Binary) Filter  208
Thickening (Binary) Filter  229
Thinning (Binary) Filter  234
Top-Hat (Graylevel) Filter  241

• Noise Class
Gamma Noise  82
Gaussian Noise  85
Negative Exponential Noise  165
Rayleigh Noise  196
Salt and Pepper Noise  202
Uniform Noise  246

• Nonlinear Filters Class
Alpha-Trimmed Mean Filter  20
Contra-Harmonic Mean Filter  54
Geometric Mean Filter  87
Harmonic Mean Filter  102
Maximum Filter  147
Median Filter  149
Midpoint Filter  152
Minimum Filter  155
Range Filter  194
Weighted Median Filter  257
Yp Mean Filter  265
• Segmentation Class
Line Detector  137
Multi-Graylevel Thresholding  163
Optimum Thresholding  175
Point Detector  185
Thresholding  239

• Spatial Filters Class
Arithmetic Mean Filter  25
Gaussian Filters  84
Gradient Masks  89
High Pass Spatial Filters  109
Laplacian Filter  135
Low Pass Spatial Filters  140
Robert's Filter  198
Sobel Filter  218
Spatial Masks  223
Weighted Mean Filter  254

• Spatial Frequency Filters Class
Circularly Symmetric Filter  37
Homomorphic Filter  119
Inverse Filter  130
Least Mean Squares Filter  136
Wiener Filter  260
Wiener Filter (Parametric)  261
• Storage Formats Class
Graphics Interchange Format (GIF)  90
Joint Photographic Experts Group (JPEG)  133
MacPaint File Format (MAC)  141
PC Paintbrush (PCX)  180
Tagged Interchange File Format (TIF)  225

• Transforms Class
Discrete Cosine Transform  67
Discrete Fourier Transform  70
Fourier Transform Properties  81
Hadamard Transform  99
Hartley Transform  105
Hough Transform  122
Slant Transform  212
Walsh Transform  248

Adaptive DW-MTM Filter
CLASS: Adaptive Filters
DESCRIPTION:
The adaptive double window modified trimmed mean (DW-
MTM) filter overcomes the difficulties of using the MMSE
filter in the presence of impulsive noise by using the median
estimator to estimate the local mean. A new local mean is
then computed using only pixels within a small graylevel
range about the median. This effectively removes outliers in
the calculation of the mean estimate, hence improving the
overall performance of the mean filter in the presence of
outliers such as salt and pepper noise.
The adaptive DW-MTM filter algorithm is described as
follows. Given a pixel located at x, y within the image, a
median filter (MED{g(x, y)}) is computed within an n x n
local region surrounding the location x, y. The median
value computed from this filter is used to estimate the mean
value of the n x n local area.

ALGORITHM:
The program assumes that the original image is a 256
graylevel x IMAGE->Rows x IMAGE->Cols pixel image
stored in the structure IMAGE. The program also assumes
that the standard deviation of the noise and the threshold
parameter K are passed to the program upon execution. The
program first computes the median in a 3 x 3 local window.
This median value is then used to determine which points
are excluded in the 5 x 5 mean filter calculation. Upon
completion of the program, the filtered image is stored in the
structure IMAGE1.
DWMTM_filter(struct Image *IMAGE, struct
             Image *IMAGE1, float NSTD, float K)
{
   int X, Y, x1, y1, med[9];
   int median;
   int gray, i, j, temp;
   long int total, sum, R;
   R=IMAGE->Cols;
   for(Y=2; Y<IMAGE->Rows-2; Y++)
      for(X=2; X<IMAGE->Cols-2; X++)
      {
         /* 3 x 3 median of the neighborhood */
         total=0;
         for(y1=-1; y1<=1; y1++)
            for(x1=-1; x1<=1; x1++)
            {
               med[total]=
                  *(IMAGE->Data+
                  X+x1+(long)(Y+y1)*R);
               total=total+1;
            }
         for(j=1; j<=8; j++)
         {
            temp=med[j];
            i=j-1;
            while(i>=0 && med[i]>temp)
            {
               med[i+1]=med[i];
               i=i-1;
            }
            med[i+1]=temp;
         }
         median=med[4];
         /* 5 x 5 mean of the pixels within
            K*NSTD of the median */
         sum=0; total=0;
         for(y1=-2; y1<=2; y1++)
            for(x1=-2; x1<=2; x1++)
            {
               gray= *(IMAGE->Data
                  +X+x1+(long)(Y+y1)*R);
               if(gray>=(median-K*NSTD))
                  if(gray<=(median+K*NSTD))
                  {
                     sum=sum+gray;
                     total=total+1;
                  }
            }
         *(IMAGE1->Data+X+(long)Y*R)
            =(unsigned char)
            ((float)sum/(float)total);
      }
}

SEE ALSO: Adaptive MMSE Filter
Adaptive Filters
DESCRIPTION:
Depending on the type of noise that is present within an
image, the type of filter that is chosen to remove the noise
can radically affect the important details that must remain
unaffected by the filtering operation. For example,
Gaussian type noise is better filtered using a mean filter,
while salt and pepper type noise is better filtered using a
median filter. The disadvantage of using a mean filter over
a median filter is that the mean filter removes high spatial
frequencies, which blurs the edges within an image.
An ideal filter to use on an image is a filter that adaptively
changes its filtering characteristics depending on the image
content within a local window. For example, if the image
content within a local window contains only edges, then a
median filter would be used to preserve the edge details. If,
within a local window, a uniform background is detected,
then the filter would change its characteristics to perform a
mean filter within this local window. Filters that
dynamically change their filtering characteristics as they are
scanned through the image are known as adaptive filters.
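As a conceptual sketch only (this is not one of the handbook's listed algorithms; the window array, its length n, and the variance threshold are assumed inputs), such a filter might pick between a local mean and a local median at each pixel:

/* for one n-pixel window: return the local median when the
   local variance is high (suggesting an edge), otherwise
   return the local mean (smooth background region) */
int adaptive_pixel(int *window, int n, float var_threshold)
{
   int i, j, t;
   float mean = 0.0, var = 0.0;
   for(i = 0; i < n; i++) mean += window[i];
   mean /= n;
   for(i = 0; i < n; i++)
      var += (window[i] - mean) * (window[i] - mean);
   var /= n;
   if(var <= var_threshold)            /* uniform region */
      return (int)(mean + 0.5);
   for(i = 1; i < n; i++){             /* edge: sort for median */
      t = window[i];
      for(j = i - 1; j >= 0 && window[j] > t; j--)
         window[j + 1] = window[j];
      window[j + 1] = t;
   }
   return window[n / 2];
}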
CLASS MEMBERSHIP:
DW-MTM Filter
MMSE Filter
SEE ALSO: Noise, Nonlinear, and Spatial Filters

Adaptive MMSE Filter
CLASS: Adaptive Filters
DESCRIPTION:
The Minimum Mean-Square Error filter (MMSE) makes use
of the knowledge of the local variance to determine if a
mean filter is to be applied to the local region of an image.
This filter works best for additive type short tail noise. The
output of the MMSE filter for a pixel located at the
coordinates x, y in an image g(x, y) is

   MMSE(x, y) = (1 − σn²/σL²) g(x, y) + (σn²/σL²) K ,

where σn² is the variance of the noise and σL² is the local
variance about the pixel located at x, y.
The parameter K is the output from a local mean filter, which
is usually a 3 x 3 or 5 x 5 mean filter. In background
regions the local variance is approximately equal to the noise
variance, reducing the adaptive MMSE to approximately a
local mean filter. Near edges, the local variance will be
much larger than the noise variance and the adaptive MMSE
filter reduces to the original unfiltered image.
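As a rough worked example (the edge variance of 2000 is only an assumed illustrative figure), take a noise variance of 200, as in the example below: in a background window whose local variance is also about 200, the weight 1 − 200/200 = 0 falls on the original pixel, so the output is simply the local mean K; near an edge whose local variance is, say, 2000, the weight becomes 1 − 200/2000 = 0.9 and the output remains close to the unfiltered value g(x, y).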
EXAMPLE:
(a) The original additive Gaussian noise corrupted image
(variance = 200) and (b) the 5 x 5 MMSE filtered image.
ALGORITHM:
The program assumes that the original image is a 256
graylevel x IMAGE->Rows x IMAGE->Cols pixel image
stored in the structure IMAGE. The program also assumes
that the variance of the noise is known and is passed to the
program upon execution. The program first computes the
local variance over an N x N window from the local first and
second moments. Next, the program computes the MMSE
filter output using the N x N mean computed in the first
step. Upon completion of the program, the filtered image is
stored in the structure IMAGE1.
MMSE_filter(struct Image *IMAGE, struct
            Image *IMAGE1, float NVAR)
{
   int X, Y, x1, y1, N, g;
   long int total, sum, sum1, R;
   float FSECOND, FVAR, FMEAN;
   R=IMAGE->Cols;
   N=5;
   for(Y=N/2; Y<IMAGE->Rows-N/2; Y++)
      for(X=N/2; X<IMAGE->Cols-N/2; X++)
      {
         sum=0;
         sum1=0;
         total=0;
         for(y1=-N/2; y1<=N/2; y1++)
         {
            for(x1=-N/2; x1<=N/2; x1++)
            {
               sum=sum + *(IMAGE->Data +
                  X+x1+(long)(Y+y1)*R);
               sum1=sum1 +
                  (long)*(IMAGE->Data +
                  X+x1+(long)(Y+y1)*R)*
                  (long)*(IMAGE->Data +
                  X+x1+(long)(Y+y1)*R);
               total = total +1;
            }
         }
         FSECOND=(float)sum1/
            (float)total;
         FMEAN = (float)sum/
            (float)total;
         FVAR=FSECOND - FMEAN*FMEAN;
         if(FVAR ==0.0)
            g=(int)(FMEAN+.5);
         else
            g=(int)((1-NVAR/FVAR)* *(
               IMAGE->Data+X+(long)Y*R) +
               NVAR/FVAR*FMEAN+.5);
         if(g>255)
            g=255;
         if(g<0)
            g=0;
         *(IMAGE1->Data+X+(long)Y*R)=g;
      }
}
SEE ALSO: Adaptive DW-MTM Filter

Alpha-Trimmed Mean Filter
CLASS: Nonlinear Filters
DESCRIPTION:
The alpha-trimmed mean filter is based on order statistics
and varies between a median and a mean filter. It is used
when an image contains both short and long tailed types of
noise. For example, many images contain both Gaussian
and salt and pepper type noise. To define the alpha-trimmed
mean filter, all pixels surrounding the pixel at the coordinate
x, y in the image A which are specified by an input mask
A(i) are ordered from minimum to maximum graylevel value

   A1 ≤ A2 ≤ A3 ≤ ... ≤ AN−1 ≤ AN .
The alpha-trimmed mean filter is then given as

   AlphaMean(A) = [1/(N − 2P)] Σ Ai ,   i = P+1 to N − P,

where N − 2P is the total number of pixels included in the
average calculation.
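For instance, with a 3 x 3 mask, N = 9; choosing P = 2 discards the two smallest and the two largest of the nine ordered values and averages the remaining N − 2P = 5 pixels, AlphaMean(A) = (A3 + A4 + A5 + A6 + A7)/5.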
EXAMPLE:
(a) A Gaussian and salt and pepper noise corrupted image
and (b) the alpha-trimmed mean filtered image using a 5 x
5 square mask and P = 3.
ALGORITHM:
The program assumes that the original image is a 256
graylevel x IMAGE->Rows x IMAGE->Cols pixel image
stored in the structure IMAGE. The program computes the
alpha-trimmed mean filter over a set of pixels contained
within a square N x N region of the image centered at the
pixel X, Y. The size of the filtering operation is determined
by the variable N and should be set to an odd number and be
less than 12. The parameter P passed to the program
determines how many of the endpoint pixels are eliminated
from the ordered data. For example, if P = 1, the pixels with
the minimum and maximum graylevel values are not
included in the average calculation. For the special cases of
P = 0 and P = (N·N)/2 − 1/2 the alpha-trimmed mean filter
becomes the mean and median filters, respectively. Upon
completion of the program, the alpha-trimmed mean filtered
image is stored in the structure IMAGE1.
AlphaMean(struct Image *IMAGE, int P,
          struct Image *IMAGE1)
{
   int X, Y, I, J, SUM, Z;
   int N, AR[121], A;
   N=7;
   for(Y=N/2; Y<IMAGE->Rows-N/2; Y++)
      for(X=N/2; X<IMAGE->Cols-N/2; X++)
      {
         Z=0;
         for(J=-N/2; J<=N/2; J++)
            for(I=-N/2; I<=N/2; I++)
            {
               AR[Z]= *(IMAGE->Data+X
                  +I+(long)(Y+J)
                  *IMAGE->Cols);
               Z++;
            }
         /* insertion sort of the window */
         for(J=1; J<=N*N-1; J++)
         {
            A=AR[J];
            I=J-1;
            while(I>=0 && AR[I]>A)
            {
               AR[I+1]=AR[I];
               I=I-1;
            }
            AR[I+1]=A;
         }
         /* average the values left after
            trimming P from each end */
         SUM=0; Z=0;
         for(J=P; J<=N*N-1-P; J++)
         {
            SUM = SUM + AR[J];
            Z++;
         }
         *(IMAGE1->Data+X+(long)Y
            *IMAGE->Cols) = (unsigned
            char)((float)SUM/(float)Z
            +.5);
      }
}
SEE ALSO: Geometric, Yp, Harmonic, and Arithmetic Mean
Filters, Median, and other Nonlinear Filters.

Area
CLASS: Mensuration
DESCRIPTION:
Area algorithms measure the number of pixels contained
within an object in an image. The boundary of the object
must be defined, or the region wherein the object lies as well
as the pixel value to be counted as part of the object must be
known. In the image below, a region is delineated and a pair
of objects lie within the region. The area algorithm given
returns an area calculation for an object given a specific
graylevel. When the physical size of a pixel is known, the
actual area in size units can be calculated.
The example below shows the original image on the left. To
the right is a thresholded, binarized, and eroded image that
has a region drawn around a white object that represents the
“end" key. The white pixels in this region, when counted,
represent the area of the object.
EXAMPLE:
ALGORITHM:
The algorithm accepts an image structure pointer, In, that is
assumed to be an 8-bit unsigned character picture. The
coordinates of the region containing the object,
(x1, y1, x2, y2), and the object graylevel, ObjVal, are also
passed. The area is computed by counting the number of
pixels in the area at the graylevel value. This value is
returned.
Objects that contain spaces or multiple objects within the
region containing pixels at the object graylevel value are also
counted. Other methods of area calculation can avoid these
problems, but require the contour pixels of the object. If the
contour is known, the area can be calculated line by line by
counting candidate pixels between contour coordinates. The
macro pix is used for simplified access to the pixels of the
image by coordinates.
#define pix(Im,x,y) \
   *(Im->Data + (x)*Im->Cols + (y))

/* Compute and return area for objects */
int area(struct Image *In, int x1, int y1,
         int x2, int y2, unsigned char ObjVal)
{
   long i, j;
   int area_value = 0;
   for(i=x1; i<=x2; ++i)
      for(j=y1; j<=y2; ++j)
         /* count pixels at the object graylevel */
         if(pix(In,i,j) == ObjVal)
            ++area_value;
   return area_value;
}

Arithmetic Mean Filter
CLASS: Spatial Filters
ALGORITHM:
The program assumes that the original image is a 256
graylevel x IMAGE->Rows x IMAGE->Cols pixel image
stored in the structure IMAGE. The program then performs
an N x N arithmetic mean filter on the image. The size of the
filtering operation is determined by the variable N and
should be set to an odd number. Upon completion of the
program, the filtered image is stored in the structure
IMAGE1.
Mean(struct Image *IMAGE, struct Image
     *IMAGE1)
{
   int X, Y;
   int I, J;
   int N, SUM;
   N=7;
   for(Y=N/2; Y<IMAGE->Rows-N/2; Y++)
      for(X=N/2; X<IMAGE->Cols-N/2; X++)
      {
         SUM=0;
         for(J=-N/2; J<=N/2; J++)
            for(I=-N/2; I<=N/2; I++)
            {
               SUM=SUM + *(IMAGE->Data+X
                  +I+(long)(Y+J)
                  *IMAGE->Cols);
            }
         *(IMAGE1->Data+X+(long)Y
            *IMAGE->Cols) = SUM/N/N;
      }
}
SEE ALSO: Median, Minimum, Maximum, and other
Nonlinear Filters

Brightness Correction
CLASS: Histogram Operation
DESCRIPTION:
Brightness correction is used to enhance the visual
appearance of an image. Brightness modification of an
image is defined as
   si = gi + brightness ,

where si is the ith graylevel value of the brightness enhanced
image, and gi is the ith graylevel value of the original image.
EXAMPLE:
(a) The original image and (b) the brightness corrected
image.
ALGORITHM:
The program assumes that the original image is a 256
graylevel x IMAGE->Rows x IMAGE->Cols pixel image
stored in the structure IMAGE. The program adds the value
passed to the program in the brightness parameter to each
pixel in the image. Upon completion of the program, the
brightness corrected image is stored in the structure
IMAGE1.
Brightness(struct Image *IMAGE, struct
           Image *IMAGE1, int brightness)
{
   int X, Y, I;
   for(Y=0; Y<IMAGE->Rows; Y++)
      for(X=0; X<IMAGE->Cols; X++)
      {
         I = *(IMAGE->Data+X+(long)Y*
            IMAGE->Cols) + brightness;
         if(I>255)
            I=255;
         if(I<0)
            I=0;
         *(IMAGE1->Data+X+(long)Y*
            IMAGE->Cols) = I;
      }
}
SEE ALSO: Graylevel Histogram, Histogram Specification,
Contrast Correction, and Nonlinear Graylevel Transforma-
tions

C.I.E. Chromaticity Diagram
CLASS: Color Image Processing
DESCRIPTION:
C.I.E. Chromaticity Diagram
[Figure: the C.I.E. chromaticity diagram, plotted on Red and
Green axes, with the visible wavelengths in nanometers
around its boundary and the white point near the center.]

   1 = Red + Green + Blue
or
   Blue = 1 − Red − Green

SEE ALSO: RGB and YIQ Color Models, Hue and
Saturation Correction, and Pseudocolor

Centroid
CLASS: Mensuration
DESCRIPTION:
Centroid refers to center and is a measure of the geographic
center of an object. The centroid is expressed in coordinates
and is calculated using the following expression:

   Xc = (1/A) Σ Xi      Yc = (1/A) Σ Yi ,   i = 1, ..., N,

where Xc and Yc are the centroid coordinates, Xi and Yi are
the coordinates of the ith object pixel, and A is the object
area. The centroid may also be calculated as the first central
moment. The example graphic illustrates the computation of
the centroid of an object using the formula given above.
EXAMPLE:
[Figure: a 13-pixel object plotted on a grid with columns 0
through 9.]
   Xc = 39/13 = 3
   Yc = 52/13 = 4
ALGORITHM:
The algorithm accepts an 8-bit unsigned character image, In,
and the coordinates of the region containing the object
(x1, y1, x2, y2) and the object graylevel, ObjVal. The object
area is computed by calling the area function (See Area).
Row and column coordinates are then accumulated for the
object. These accumulations are divided by the computed
area for the object and the resulting centroid coordinates are
returned as the x and y members of the coord data structure.
Integer arithmetic assures that a coordinate value will be
returned. If area_value computes to zero, negative
coordinates are returned.
/* coordinates structure */
struct coord {
   float x, y;
};

/* Compute and return centroid */
void centroid(struct Image *In, int x1,
              int y1, int x2, int y2,
              unsigned char ObjVal,
              struct coord *coords)
{
   long i, j;
   int area_value, Xcent=0, Ycent=0;
   area_value =
      area(In, x1, y1, x2, y2, ObjVal);
   if(area_value == 0){
      coords->x = -1; coords->y = -1;
      return;
   }
   for(i=x1; i<=x2; ++i)
      for(j=y1; j<=y2; ++j){
         if(pix(In,i,j)==ObjVal){
            Xcent += j;
            Ycent += i;
         }
      }
   coords->x = Xcent/area_value + 1;
   coords->y = Ycent/area_value + 1;
   return;
}
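A brief usage sketch (the image variable Img and the choice of 255 as the object graylevel are illustrative assumptions, not part of the handbook's listing):

   struct coord c;
   /* centroid of the 255-valued object lying within
      the whole image Img */
   centroid(&Img, 0, 0, Img.Rows-1, Img.Cols-1, 255, &c);
   /* c.x and c.y now hold the centroid coordinates,
      or -1 if no object pixels were found */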
SEE ALSO: Area, Maximum Axis, Minimum Axis,
Moments

Chain Code
CLASS: Coding and Compression
DESCRIPTION:
Chain Coding is used to define object boundaries for the
purpose of measurement and identification. A chain code
follows the boundary of an object and indicates the change
in direction according to a 4- or 8-direction pattern as shown
here:
[Figure: the 4-direction and 8-direction numbering schemes;
in the 8-direction code used below, 0 is the right horizontal
direction and the codes increase counterclockwise through 7.]
The code is dependent on where on the boundary the code
sampling starts, but may be normalized using a number of
techniques.
EXAMPLE:
[Figure: a short open contour with the directional mapping of
each of its segments shown to the right.]
A contour is shown with the directional mappings given to
the right, together with the 8-direction chain code that would
be produced for it.
ALGORITHM:
The routine is passed an image structure pointer of unsigned
character data containing the open ended contour, In, the x
and y coordinates of the start of the contour to be coded, and
a threshold value, thresh, used to determine membership in
the contour from a pixel neighborhood. The routine is also
supplied a character string pointer, code, which must be
initialized to the maximum expected length of the code. The
routine returns the length of the code in the int pointer
length. Caution must be exercised in that the code string will
contain zeroes that indicate the right horizontal direction and
not string termination.
The routine uses a macro, idx, to simplify coordinate access
into the image array.
/* compute 8-direction chain code given
   start point of contour */
#define idx(m,n)\
   *(In->Data+(m)+((n)*In->Cols))

chain_8(struct Image *In, int x, int y,
        char *code, int *length,
        unsigned char thresh)
{
   *length = 1;
   for(;;){
      idx(x,y) = 0;
      if(idx(x+1,y)>thresh){    /* right      */
         *(code++) = 0;
         ++x;
         ++(*length);
         continue;
      }
      if(idx(x+1,y-1)>thresh){  /* up-right   */
         *(code++) = 1;
         --y; ++x;
         ++(*length);
         continue;
      }
      if(idx(x,y-1)>thresh){    /* up         */
         *(code++) = 2;
         --y;
         ++(*length);
         continue;
      }
      if(idx(x-1,y-1)>thresh){  /* up-left    */
         *(code++) = 3;
         --y; --x;
         ++(*length);
         continue;
      }
      if(idx(x-1,y)>thresh){    /* left       */
         *(code++) = 4;
         --x;
         ++(*length);
         continue;
      }
      if(idx(x-1,y+1)>thresh){  /* down-left  */
         *(code++) = 5;
         ++y; --x;
         ++(*length);
         continue;
      }
      if(idx(x,y+1)>thresh){    /* down       */
         *(code++) = 6;
         ++y;
         ++(*length);
         continue;
      }
      if(idx(x+1,y+1)>thresh){  /* down-right */
         *(code++) = 7;
         ++y; ++x;
         ++(*length);
         continue;
      }
      return;
   }
}
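A brief usage sketch (the image variable, start point, buffer size, and threshold are illustrative assumptions):

   char code[1024];
   int length;
   /* trace the contour starting at (40, 25) in the contour
      image Img, treating pixels above 128 as contour pixels */
   chain_8(&Img, 40, 25, code, &length, 128);

Note that chain_8 zeroes each contour pixel as it visits it, so the routine consumes the contour image while it codes it.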
SEE ALSO: Mensuration

Circularity
CLASS: Mensuration
DESCRIPTION:
Circularity is the measure of an object's closeness to
circular shape. It can be expressed as the variance of the
boundary pixels' distance to the centroid. If the object is
perfectly circular, this variance will be zero. The limit on
circularity is one-half the major axis.
ALGORITHM:
The circularity algorithm requires both the boundary pixels
of the object in question as well as the centroid. The
boundary may be determined by subtracting the single pixel
erosion (See Erosion) of the object from the object. The
algorithm accepts an 8-bit unsigned character image, In, and
the coordinates of the region containing the object (x1, y1,
x2, y2) boundary and the object graylevel, ObjVal. The
object centroid is computed by calling the centroid function
(See Centroid); then a Euclidean distance measure is applied
to each boundary pixel with the centroid. These values are
accumulated and the variance calculated.
/* coordinates structure */
struct coord {
   float x, y;
};

/* Compute and return circularity */
double circularity(struct Image *In, long x1,
                   long y1, long x2, long y2,
                   unsigned char ObjVal)
{
   struct coord Ocenter;
   long i, j, points = 0;
   double dist, mean = 0.0, sumsq = 0.0, variance;
   centroid(In, x1, y1, x2, y2, ObjVal, &Ocenter);
   /* the remainder of this listing is reconstructed from
      the description above: accumulate the Euclidean
      distances of the boundary pixels from the centroid
      and compute their variance */
   for(i=x1; i<=x2; ++i)
      for(j=y1; j<=y2; ++j)
         if(pix(In,i,j) == ObjVal){
            dist = sqrt((i-Ocenter.y)*(i-Ocenter.y)+
                        (j-Ocenter.x)*(j-Ocenter.x));
            mean += dist;
            sumsq += dist*dist;
            ++points;
         }
   mean = mean/(double)points;
   variance = sumsq/(double)points - mean*mean;
   return variance;
}
Circularly Symmetric Filter
CLASS: Spatial Frequency Filters

   ...
   f = filt->Data;
   N = filt->Rows;
   ...
      /* upper left quadrant */
      *(f+(i*N+j)) = *(f+(i*N+j)+1) = Huv;
      /* lower right quadrant */
      *(f+(l*N+m)) = *(f+(l*N+m)+1) = Huv;
      /* upper right quadrant */
      *(f+(i*N+m)) = *(f+(i*N+m)+1) = Huv;
      /* lower left quadrant */
      *(f+(l*N+j)) = *(f+(l*N+j)+1) = Huv;
   ...

SEE ALSO: Fourier Transform

Closing (Binary) Filter
CLASS: Morphological Filters
DESCRIPTION:
Morphological closing of a binary object is defined as the
dilation of that object followed by the erosion of the dilated
object. The closing filter operation will reduce small inward
bumps and small holes within the object.

   Close(A, B) = (A ⊕ B) ⊖ B
EXAMPLES:
[Figure: (a) the original binary image of object A and (b) the
closed image of object A with a circular structuring
function B.]
[Figure: (a) the original image and (b) the closed image.]
ALGORITHM:
The program assumes that the original binary image is an
IMAGE->Rows x IMAGE->Cols pixel image with a
background graylevel value of 0 and an object graylevel
value of 255 (object) and is stored in the structure IMAGE.
The N x N structuring function is stored in array MASK[][].
Upon completion of the program, the closed image is stored
in the structure FILTER. The binary erosion and dilation
functions used by the algorithm can be found under binary
erosion and dilation respectively.
#define MN 5
Close(struct Image *IMAGE, int
      MASK[][MN], struct Image *FILTER)
{
   int X, Y;
   Dilation(IMAGE, MASK, FILTER);
   for(Y=0; Y<IMAGE->Rows; Y++)
      for(X=0; X<IMAGE->Cols; X++)
      {
         *(IMAGE->Data + X +
            (long)Y*IMAGE->Cols) =
            *(FILTER->Data + X + (long)Y
            *IMAGE->Cols);
      }
   Erosion(IMAGE, MASK, FILTER);
   FILTER->Rows=IMAGE->Rows;
   FILTER->Cols=IMAGE->Cols;
}
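A short usage sketch (the image variables are assumptions; the contents of MASK must encode the structuring element in whatever form the binary Dilation and Erosion routines expect):

   int MASK[MN][MN];
   /* fill MASK with the desired structuring element,
      then close the binary image Img into Filtered */
   Close(&Img, MASK, &Filtered);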
SEE ALSO: Binary Dilation, Erosion, and Opening Filters

Closing (Graylevel) Filter
CLASS: Morphological Filters
DESCRIPTION:
Morphological closing of an image is defined as the
graylevel dilation of the image followed by the graylevel
erosion of the dilated image. The closing filter operation
will reduce small negative oriented graylevel regions and
negative graylevel noise regions generated from salt and
pepper noise.

   Close(A, B) = (A ⊕ B) ⊖ B
EXAMPLES:
[Figure: (a) the original image and (b) the closed image using
an all zero 7 x 7 mask.]
ALGORITHM:
The program assumes that the original image is a 256
graylevel x IMAGE->Rows x IMAGE->Cols pixel image
stored in the structure IMAGE. The N x N structuring
function is stored in the array MASK[][]. Upon completion
of the program, the closed image is stored in the structure
FILTER. The graylevel erosion and dilation functions used
by the algorithm can be found under graylevel erosion and
dilation respectively.
#define N 5
CloseGray(struct Image *IMAGE, int
          MASK[][N], struct Image *FILTER)
{
   int X, Y;
   DilationGray(IMAGE, MASK, FILTER);
   for(Y=0; Y<IMAGE->Rows; Y++)
   {
      for(X=0; X<IMAGE->Cols; X++)
      {
         *(IMAGE->Data+X+(long)Y*
            IMAGE->Cols)= *(FILTER->
            Data+X+(long)Y*IMAGE->Cols);
      }
   }
   ErosionGray(IMAGE, MASK, FILTER);
   FILTER->Rows=IMAGE->Rows;
   FILTER->Cols=IMAGE->Cols;
}
SEE ALSO: Graylevel Dilation, Erosion, Top-Hat, and
Opening Filters

Clustering
CLASS: Mensuration
DESCRIPTION:
Clustering is the process of counting and labeling of objects
within an image. Clusters consist of pixel groupings that are
related to one another by a predetermined measure. This
measure can be defined as a distance between clusters or a
similarity measure such as pixel value, or it may be a
complex set of identifiers and constraints that define
membership in a cluster. The cluster may ultimately be
mapped into a binary image as a unique object.
One of the simplest approaches is to specify a size in pixels
that the clusters should be. If we assume a binary image, the
clustering algorithm processes each pixel and when one is
found that is nonzero, it becomes part of the first cluster and
is marked. The next nonzero pixel found is tested to see if
the distance between it and the previous pixel is less than or
equal to the desired cluster pixel size. If it is, it is marked as
a member of the first cluster and the search continues. If it is
not, it becomes the first member of the second cluster and is
marked accordingly. This process continues until all pixels
have been evaluated.
The example shows a graphic of a set of similar-sized
objects that have been labelled by grayscale value.
EXAMPLE:
[Figure: a set of similar-sized objects, each labelled with a
different grayscale value.]
ALGORITHM:
The following variables, with their descriptions, are used in
the cluster analysis function:

   i, j, k       Loop variables.
   n             Cluster count variable.
   In            Unsigned character image
                 data structure pointer to
                 binary image.
   In->Rows      Rows in Image.
   In->Cols      Columns in Image.
   clustx[]      Temporary storage for x-
                 coordinate of last pixel in
                 cluster [].
   clusty[]      See clustx.
   new_clust     Flag for new cluster.
   edist         Euclidean distance.
   clust_size    Size of clusters to be
                 found.

At the completion of the function, the In data array contains
pixel values that reflect their membership in the clusters
found that meet the criteria of being clust_size apart.
Clusters of pixels that are larger than clust_size will be
partitioned into pieces as they are found in the image. The
image is scanned from the upper left corner origin in
column-row order.
cluster(struct Image *In, int clust_size)
{
   int n, i, j, k, new_clust;
   float edist, clustx[256], clusty[256];
   /* initialize cluster count variable n */
   n = 0;
   /* double-loop through each pixel
      of binary image */
   for(i=0; i < In->Rows; ++i)
      for(j=0; j < In->Cols; ++j){
         if(n==0){
            /* only do for 1st cluster */
            if(pix(In,i,j) != 0){
               n = 1;  /* 1st cluster found */
               /* store x coord. */
               clustx[1] = i;
               /* store y coord. */
               clusty[1] = j;
               /* mark pixel as cluster 1 */
               pix(In,i,j) = 1;
            }
         }
         /* test for membership in all
            known clusters */
         else if(pix(In,i,j) != 0){
            /* marker for new cluster */
            new_clust = 0;
            /* compute Euclidean distance */
            for(k = 1; k <= n; ++k){
               edist = sqrt((i-clustx[k])*
                  (i-clustx[k])+
                  (j-clusty[k])*
                  (j-clusty[k]));
               /* test against cluster
                  size desired */
               if(edist <= clust_size){
                  /* set pixel to cluster number */
                  pix(In,i,j) =
                     (unsigned char)k;
                  new_clust = 1;
                  k = n+1;
               }
            }
            /* add a new cluster */
            if(new_clust == 0 && n < 255){
               n = n + 1;
               clustx[n] = i;
               clusty[n] = j;
               pix(In,i,j) =
                  (unsigned char)n;
            }
         }
      }
}
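A usage sketch (the image variable and the spacing of 5 pixels are illustrative assumptions): once the image has been reduced to a binary image with nonzero object pixels on a zero background,

   cluster(&Img, 5);

labels every object pixel with the number of the cluster it joined, grouping pixels that lie within 5 pixels of an existing cluster member.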
Coding and Compression Class
DESCRIPTION:
Coding is a broad term that can refer to data encryption,
representation, or compression schemes. Although images
are often encrypted for security purposes, these methods are
beyond the scope of this book. Only one representation
coding is covered in this book, chain codes. Chain codes
play an important role in image understanding and computer
vision studies and allow the user to describe contours and
edges in a compact and useful way.
Compression is an important aspect of image processing
simply because the data sets are so large. Clearly, an RGB,
512 x 512 pixel image at 24 bits per pixel requires 786,432
bytes of storage. Two such pictures exceed the capacity of
most magnetic floppy diskettes. If the image data is fairly
correlated, the compression ratio possible can exceed 50 to
1. Of course, the basic drawback to compression is the time
it takes to compress and the time it takes to decompress.
However, special hardware is available that can maximize
the speed of the compress/decompress cycle to real-time
image display rates.
Although image Storage Formats constitute a separate class
in themselves, formats often incorporate compression
schemes as a part of their standard.
CLASS MEMBERSHIP:
Chain Codes
Huffman Coding
Run Length Encoding (RLE)
SEE ALSO: Storage Formats
Color Image Processing
DESCRIPTION:
A new and very powerful field of image processing is color
image processing. Color image processing can be separated
into two general areas: (1) the enhancement of color images,
and (2) the pseudocoloring of graylevel images. The en-
hancement of a color image involves either the enhancement
of the three color (Red, Blue, and Green) components of an
image or the enhancement of the tint or saturation of an
image. In contrast, pseudocoloring makes use of color to
represent the graylevel values of an image. In this way,
certain attributes of an image can be easily visualized. The
human eye is much more sensitive to color variations than to
graylevel variations. For example, arteries within an x-ray
image can be highlighted in red to improve the accuracy in
locating blocked arteries.
CLASS MEMBERSHIP:
C.I.E. Chromaticity Diagram
Color Saturation Correction
Color Tint Correction
HSI Color Model
Pseudocolor
Pseudocolor Display
RGB Color Model
True-Color Display
YIQ Color Model

Color Saturation Correction
CLASS: Color Image Processing
DESCRIPTION:
This algorithm adjusts the color saturation of an RGB color
image in a similar manner as the color control on a color
television. The algorithm is based upon the R − Y, G − Y,
and B − Y color model used in many of today's color
televisions. The color components in terms of R, G, and B
are

   R − Y = 0.70R − 0.59G − 0.11B
   B − Y = −0.30R − 0.59G + 0.89B
   G − Y = −0.30R + 0.41G − 0.11B ,

where the luminance component Y is defined as

   Y = 0.30R + 0.59G + 0.11B
ALGORITHM:
The program assumes that the original image is a 24-bit
color image of spatial size of RED->Rows x RED->Cols
pixels. The program assumes that the image is a RGB image
and is stored in three structures: RED, GREEN, and BLUE.
Passed to the program in the variable saturation is an integer
number greater than zero that represents in percent the
amount of color to be added back to the luminance or black
and white image. The first thing the program does is
compute the R − Y, G − Y, and B − Y components. Next, it
scales these values using the variable saturation and adds
them back to the luminance component to generate the new
RGB values. Upon completion of the program, the saturation
corrected image is returned in the three structures RED,
GREEN, and BLUE.
Saturation(struct Image *RED, struct Image
           *BLUE, struct Image *GREEN,
           int saturation)
{
   int X, Y, RY, BY, GY, RY1, BY1, GY1;
   int Y1, R, G, B, R1, G1, B1;  /* Y1 holds the luminance */
   long int K;
   for(Y=0; Y<RED->Rows; Y++)
   {
      for(X=0; X<RED->Cols; X++)
      {
         K=X+(long)Y*RED->Cols;
         R1= *(RED->Data+K);
         G1= *(GREEN->Data+K);
         B1= *(BLUE->Data+K);
         RY1=(int)((70*(int)R1-
            59*(int)G1-11*(int)B1)/100);
         BY1=(int)((-30*(int)R1-
            59*(int)G1+89*(int)B1)/100);
         GY1=(int)((-30*(int)R1+
            41*(int)G1-11*(int)B1)/100);
         Y1=(int)((30*(int)R1+
            59*(int)G1+11*(int)B1)/100);
         BY=(BY1*saturation)/100;
         RY=(RY1*saturation)/100;
         GY=(GY1*saturation)/100;
         R = RY + Y1; G = GY + Y1; B = BY + Y1;
         if(R <0) R=0;
         if(B <0) B=0;
         if(G <0) G=0;
         if(R >255) R=255;
         if(B >255) B=255;
         if(G >255) G=255;
         *(GREEN->Data+K)=
            (unsigned char)G;
         *(BLUE->Data+K)=
            (unsigned char)B;
         *(RED->Data+K)=
            (unsigned char)R;
      }
   }
}
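A usage sketch (the image variables and the 50 percent figure are illustrative assumptions): calling

   Saturation(&Red, &Blue, &Green, 50);

scales the R − Y, G − Y, and B − Y signals to half strength before adding them back to the luminance, roughly halving the color saturation; a value of 100 leaves the image essentially unchanged apart from integer rounding.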
SEE ALSO: RGB, HSI, and YIQ Color Models, C.I.E. Color
Chart, Pseudocolor, and Color Tint Correction

Color Tint Correction
CLASS: Color Image Processing
DESCRIPTION:
This algorithm adjusts the tint of an RGB color image in a
similar manner as the tint control on a color television. The
algorithm is based upon the R − Y, G − Y, and B − Y color
model used in many of today's color televisions. The color
components in terms of R, G, and B are

   R − Y = 0.70R − 0.59G − 0.11B
   B − Y = −0.30R − 0.59G + 0.89B
   G − Y = −0.30R + 0.41G − 0.11B ,

where the luminance component Y is defined as

   Y = 0.30R + 0.59G + 0.11B
ALGORITHM:
The program assumes that the original image is a 24-bit
color image of spatial size of RED->Rows x RED->Cols
pixels. The program assumes that the image is a RGB image
and is stored in three equal sized structures: RED, GREEN,
and BLUE. Passed to the program in the variable tint is an
integer number in degrees between -180 and +180 that
represents the angle by which the tint of the image is to be
rotated. Positive angles give the color image more of a
green tint while negative angles give the color image more
of a red tint. The first thing the program does is compute the
R − Y and B − Y components. Next, it rotates these two
vectors by the angle given in the variable tint. Finally, the
program then recomputes the new RGB values, placing them
in the three color image arrays. Upon completion of the
program, the tint corrected image is returned in the three
structures RED, GREEN, and BLUE.
Hue(struct Image *RED, struct Image *BLUE,
    struct Image *GREEN, int tint)
{
   int X, Y, RY, BY, GY, RY1, BY1;
   int Y1, R, G, B, S, C, R1, G1, B1;  /* Y1 holds the luminance */
   long int K;
   float theta;
   theta=(3.14159*(float)tint)/180.0;
   C=256*cos((double)theta);
   S=256*sin((double)theta);
   for(Y=0; Y<RED->Rows; Y++)
   {
      for(X=0; X<RED->Cols; X++)
      {
         K=X+(long)Y*RED->Cols;
         R1= *(RED->Data+K);
         G1= *(GREEN->Data+K);
         B1= *(BLUE->Data+K);
         RY1=(int)((70*(int)R1-
            59*(int)G1-11*(int)B1)/100);
         BY1=(int)((-30*(int)R1-
            59*(int)G1+89*(int)B1)/100);
         Y1=(int)((30*(int)R1+
            59*(int)G1+11*(int)B1)/100);
         BY=(C*BY1 - S*RY1)/256;
         RY=(S*BY1 + C*RY1)/256;
         GY=(int)((-51*RY -
            19*BY)/100);
         R = RY + Y1; G = GY + Y1; B = BY + Y1;
         if(R <0) R=0;
         if(B <0) B=0;
         if(G <0) G=0;
         if(R >255) R=255;
         if(B >255) B=255;
         if(G >255) G=255;
         *(GREEN->Data+K)=
            (unsigned char)G;
         *(BLUE->Data+K)=
            (unsigned char)B;
         *(RED->Data+K)=
            (unsigned char)R;
      }
   }
}
SEE ALSO: RGB, HSI, and YIQ Color Models, C.I.E. Color
Chart, Pseudocolor, and Color Saturation Correction

Compactness
CLASS: Mensuration
DESCRIPTION:
Compactness is an object distribution measure similar to
circularity, but is defined as the perimeter squared divided
by the area. The compactness value will be minimal for a
circular object, but it is sensitive to noise in the
perimeter measure. Objects with noisy boundaries will have
a large perimeter measure with respect to similar sized
objects with smooth boundaries, hence different
compactness values in spite of circular shape. No algorithm
is given as compactness can be readily calculated from the
area of the object and the area of the object boundary
(perimeter). Examples are shown, however, of compactness
measurements on various objects.
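For a continuous circle of radius r the measure works out to (2πr)²/(πr²) = 4π ≈ 12.57, the smallest value compactness can take; digitized objects yield somewhat different numbers, as the example below shows.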
EXAMPLE:
Each of the objects shown has the same area, 32 pixels.
Compactness for each object is as follows:
A: 7.03 B:8 C:15.485 D: 15.15
Objects A and B are similarly shaped and have very close
compactness values. Objects C and D, however, are very
different in shape in spite of nearly identical compactness values.
SEE ALSO: Area, Circularity, Perimeter54 Contra-Harmonic Mean Filter
CLASS: Nonlinear Filters
DESCRIPTION:
The contra-harmonic mean filter is a member of a set of
nonlinear mean filters which are better at removing Gaussian
type noise and preserving edge features than the arithmetic
mean filter. The contra-harmonic mean filter is very good at
removing positive outliers for negative values of P and
negative outliers for positive values of P. If all the pixels
included in the calculation of the contra-harmonic mean are
zero, the output of the contra-harmonic filter will also be
zero. The definition of the contra-harmonic mean filter is

   ContraHarmonic(x, y) = Σ g(x + i, y + j)^(P+1) / Σ g(x + i, y + j)^P ,

with both sums taken over the coordinates (i, j) of the mask.
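As a small worked example, take the three pixel values 2, 4, and 8 with P = 1: the output is (2² + 4² + 8²)/(2 + 4 + 8) = 84/14 = 6, noticeably larger than the arithmetic mean of about 4.67, which is how positive values of P suppress small (negative) outliers; P = 0 gives the ordinary arithmetic mean and P = −1 gives the harmonic mean.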
ALGORITHM:
The program assumes that the original image is a 256
graylevel x IMAGE->Rows x IMAGE->Cols pixel image
stored in the structure IMAGE. The program computes the
contra-harmonic filter over a set of pixels contained within a
square N x N region of the image centered at the pixel X, Y.
The program expects the order of the filter to be passed to it
upon execution. The size of the filtering operation is
determined by the variable N and should be set to an odd
number and should be less than 12. Upon completion of the
program, the filtered image is stored in the structure
IMAGE1.
ContraharmonicMean(struct Image *IMAGE,
                   int P, struct Image *IMAGE1)
{
   int X, Y, I, J, Z;
   int N;
   int AR[121], A;
   float SUM;
   float SUM1;
   N=5;
   for(Y=N/2; Y<IMAGE->Rows-N/2; Y++)
      for(X=N/2; X<IMAGE->Cols-N/2; X++)
      {
         Z=0;
         for(J=-N/2; J<=N/2; J++)
            for(I=-N/2; I<=N/2; I++)
            {
               AR[Z] = *(IMAGE->Data+X
                  +I+(long)(Y+J)
                  *IMAGE->Cols);
               Z++;
            }
         Z=0;
         SUM=0.0;
         SUM1=0.0;
         for(J=0; J<=N*N-1; J++)
         {
            if(AR[J]==0 && P<0)
               Z=1;
            else
            {
               SUM=SUM+pow((double)AR[J],
                  (double)(P+1));
               SUM1=SUM1+pow((double)
                  AR[J], (double)P);
            }
         }
         if(Z==1)
            *(IMAGE1->Data+X+(long)Y
               *IMAGE->Cols)=0;
         else
         {
            if(SUM1==0.0)
               A=0;
            else
               A=(int)(SUM/SUM1);
            if(A>255)
               A=255;
            *(IMAGE1->Data+X+(long)Y
               *IMAGE->Cols)=
               (unsigned char)A;
         }
      }
}
SEE ALSO: Geometric, Yp, Harmonic, and Arithmetic Mean
Filters, Median, and other Nonlinear Filters