Automatic Body Feature Extraction From A Marker-Less Scanned Human Body
Abstract

In this paper, we propose a novel method of body feature extraction from a marker-less scanned body. The descriptions of human body features, mostly defined in ASTM (1999) and ISO (1989), are interpreted into logical mathematical definitions. Using these definitions, we employ image processing and computational geometry techniques to identify body features automatically from the torso point cloud. We currently extract 21 feature points and 35 feature lines on the human torso; this number may be extended if necessary. Moreover, body feature extraction takes less than 2 min of processing time starting from the raw point cloud. The algorithm has been successfully tested on several Asian female adults aged from 18 to 60.
Fig. 2. The 1-D Sobel mask.

Fig. 4. The 1-D Laplace mask.
head. Many body feature points, including the front neck point,
can be revealed by this method.
$\nabla^2 f = \partial^2 f/\partial x^2 + \partial^2 f/\partial y^2$. (7)
$\nabla^2 f$ denotes the second derivative of $f$; it represents the variation of the slope at $(x, y)$, i.e. its curvature. The Laplace mask is usually used to search out the edges of a specific subject in an image. Eq. (7) can be rewritten in finite-difference form as follows:
$\partial f/\partial x \approx \Delta f/\Delta x \approx \Delta_x f(x, y)$ (8)

$\Delta^+_x f(x, y) = f(x + 1, y) - f(x, y)$
$\Delta^-_x f(x, y) = f(x, y) - f(x - 1, y)$ (9)

$\Delta^2_x f(x, y) = (\Delta^+_x f(x, y) - \Delta^-_x f(x, y))/\Delta x$ (10)

Since $\Delta x = 1$,

$\Delta^2_x f(x, y) = \Delta^+_x f(x, y) - \Delta^-_x f(x, y) = f(x + 1, y) + f(x - 1, y) - 2 f(x, y)$. (11)

Similarly,

$\Delta^2_y f(x, y) = \Delta^+_y f(x, y) - \Delta^-_y f(x, y) = f(x, y + 1) + f(x, y - 1) - 2 f(x, y)$. (12)
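The discrete Laplacian of Eqs. (11) and (12) therefore reduces to a fixed five-point stencil. The following minimal Python/NumPy sketch applies it to a 2D image; the function name and the zeroed border are our own illustrative choices, not from the paper:

```python
import numpy as np

def laplacian(img):
    """Discrete Laplacian via Eqs. (11) + (12):
    f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4 f(x,y).
    Border pixels are left at zero because the central
    difference needs both neighbours.
    """
    img = np.asarray(img, dtype=float)
    out = np.zeros_like(img)
    out[1:-1, 1:-1] = (img[2:, 1:-1] + img[:-2, 1:-1] +
                       img[1:-1, 2:] + img[1:-1, :-2] -
                       4.0 * img[1:-1, 1:-1])
    return out
```

Zero-crossings of this response can then serve to pick out edges on the coded depth image, as described above.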
$X_{fd}(i) = x_{i+k} - x_i$, $Y_{fd}(i) = y_{i+k} - y_i$
$X_{bd}(i) = x_{i-k} - x_i$, $Y_{bd}(i) = y_{i-k} - y_i$. (13)
Referring to Fig. 5,

$\vec{P_i F_k} = \vec{P_i P_{i+k}} + \vec{P_i P_{i-k}} = (X_{fd}(i) + X_{bd}(i),\ Y_{fd}(i) + Y_{bd}(i))$. (14)

Let

$X_c = |X_{fd}(i) + X_{bd}(i)|$ and $Y_c = |Y_{fd}(i) + Y_{bd}(i)|$. (15)

The bending value of point $P_i$ is

$Bv(i) = \sqrt{X_c(i)^2 + Y_c(i)^2}$. (16)
Be aware that no bending values exist in the region of the first and last k/2 points if, and only if, the subject curve is not a closed loop.
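A direct transcription of Eqs. (13)-(16) might look as follows. This is a sketch only: the offset k and the NaN padding at the open ends are illustrative assumptions (a closed loop would instead wrap the indices modulo n).

```python
import numpy as np

def bending_values(points, k=5):
    """Bending value Bv(i) of an ordered 2D curve, Eqs. (13)-(16).

    points : (n, 2) array of ordered curve samples.
    k      : forward/backward offset; the paper leaves its value
             open, so k=5 is only an illustrative choice.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    bv = np.full(n, np.nan)          # no value near the open ends
    for i in range(k, n - k):
        fd = pts[i + k] - pts[i]     # (X_fd, Y_fd), Eq. (13)
        bd = pts[i - k] - pts[i]     # (X_bd, Y_bd), Eq. (13)
        xc, yc = np.abs(fd + bd)     # Eq. (15)
        bv[i] = np.hypot(xc, yc)     # Eq. (16)
    return bv
```

Sharp corners of the curve then show up as local maxima of Bv, which is how the cusp and maximum-bending searches below use it.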
2.3. Curve fitting and interpolation

A body feature curve that may pass through some feature points should retain its original geometric properties. After applying certain mask operators, a series of local points can be obtained. However, in some circumstances the distribution of these points looks not curved but zigzagged. In such circumstances, a lower-order B-spline is used to approximate these points in order to generate a smoother curve.
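As a sketch of this smoothing step, SciPy's smoothing B-spline routines can stand in for the fitting described above; the degree and smoothing factor below are illustrative values, not ones taken from the paper.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def smooth_curve(points, degree=3, smoothing=5.0, samples=200):
    """Approximate zigzagged feature points by a low-order B-spline.

    points    : (n, 2) array of ordered, noisy feature points.
    smoothing : SciPy smoothing factor s; larger values trade
                closeness to the data for smoothness.
    Returns a (samples, 2) array of points on the fitted curve.
    """
    pts = np.asarray(points, dtype=float)
    tck, _ = splprep([pts[:, 0], pts[:, 1]], k=degree, s=smoothing)
    u = np.linspace(0.0, 1.0, samples)
    x, y = splev(u, tck)
    return np.column_stack([x, y])
```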
2.4. Intersection curve

The surface of the human body is rather smooth and continuous, so a plane passing through feature points will cut the body in a smooth curve. However, data points obtained from 3D scanners are discrete in nature: a plane passing through some points of the data set will generally meet neither another point nor a curve. Nevertheless, if the point cloud is dense enough, collecting the points within a given distance of the plane will "form" a sectional curve. Fig. 6 shows the flowchart for obtaining the sectional curve from a data set. The points are distributed on both sides of the plane. Sorting the points by their polar angles, we can "stitch" the sequence of points together. The stitch lines intersect the plane at the "sectional curve".

Fig. 6. Method of obtaining a sectional curve.
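One possible reading of this procedure in code: keep the points inside a thin slab around the cutting plane and order them by polar angle about their centroid. The slab tolerance and the in-plane basis construction are our own assumptions.

```python
import numpy as np

def sectional_curve(points, plane_point, plane_normal, tol=2.0):
    """Collect scan points near a cutting plane and order them
    into a sectional curve (cf. Fig. 6).

    points       : (n, 3) point cloud.
    plane_point  : any point on the cutting plane.
    plane_normal : plane normal (need not be unit length).
    tol          : half-thickness of the slab, in cloud units
                   (2 mm here is an illustrative choice).
    """
    pts = np.asarray(points, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    dist = (pts - plane_point) @ n          # signed plane distance
    band = pts[np.abs(dist) <= tol]
    u = np.cross(n, [0.0, 0.0, 1.0])        # in-plane basis (u, v)
    if np.linalg.norm(u) < 1e-9:            # normal is vertical
        u = np.cross(n, [1.0, 0.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    c = band - band.mean(axis=0)
    theta = np.arctan2(c @ v, c @ u)        # polar angle in plane
    return band[np.argsort(theta)]          # "stitched" sequence
```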
3. Pre-processing

The 3D data in its original format not only contains no geometric features, but also occupies a lot of memory space, and manipulating the data in 3D space is computationally difficult and time consuming. The purpose of post-processing the raw scanned data is to sort the data into a more meaningful format, so that it can be conveniently used for feature recognition.

During the full body scanning, subjects are asked to keep their arms and legs slightly separated. Although footprints inside the scanner guide the subjects as to where to face, the scanner and body are not sufficiently aligned. Therefore, the point cloud is rectified according to the principal axes of its tensor of inertia. The X, Y and Z axes correspond to body thickness, width, and height, respectively, as shown in Fig. 7.

In order to simplify the subsequent feature extraction algorithm, the point cloud is sliced into many layers along the Z axis. On each layer, the scanned points may form one or more loops. The slices are then segmented into torso, arms and legs using a body segmentation scheme [21]. Fig. 8(a) shows the result after the segmentation process.
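The rectification step above can be sketched as a principal-axis alignment. Assuming unit point masses, the principal axes of the inertia tensor coincide with the eigenvectors of the point covariance matrix, so a covariance eigendecomposition suffices:

```python
import numpy as np

def rectify_point_cloud(points):
    """Align a scanned cloud with its principal axes (a sketch of
    the rectification step; unit point masses assumed).

    Returns the centred cloud in the principal-axis frame, with
    axes ordered so Z carries the largest spread (height), as in
    the convention of Fig. 7.
    """
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)
    # Covariance eigenvectors == inertia principal axes (for unit
    # masses, I = trace(C) * Id - C shares C's eigenvectors).
    cov = np.cov(centred.T)
    _, eigvec = np.linalg.eigh(cov)   # eigenvalues ascending
    # smallest spread -> X (thickness), middle -> Y (width),
    # largest -> Z (height)
    return centred @ eigvec
```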
In order to take advantage of image processing techniques and to avoid the complexity of sculptured surface reconstruction, the point cloud is encoded into a 2D depth map. By observing the shape of a human torso, we found that it resembles a cylinder, so these points are transformed into cylindrical coordinates (r, θ, z). The θ and z components are used as image coordinates on the depth image, and the distance component r is mapped into a 16-bit grey intensity. The size of the image is set to 720 × (torso height). Noise existing in the scan data is filtered by the Laplace mask, and the voids are then filled. Fig. 8(b) shows the coded image for the torso. During the conversion process, if more than one data point resides in the same pixel, only the point with the highest intensity, i.e. at the furthest distance from the central axis, is recorded onto the image. The reason for retaining the outermost points is to avoid ambiguity in case portions of the torso fold, for instance, when the subject is rather fat.

By assuming each pixel on the body image to be a circular segment, the circumference of each horizontal layer is given by:

$\mathrm{Circumference} = \sum_{i=0}^{720} r_i \pi / 360$. (17)

Fig. 8(c) shows the circumference on each layer. The trend of the circumference of each layer is useful information for locating body features.
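A compact sketch of the cylindrical encoding and of Eq. (17) follows. Unlike the paper, it keeps raw radii rather than 16-bit grey levels, and the row binning is an illustrative choice:

```python
import numpy as np

def torso_depth_map(points, z_bins):
    """Encode a centred torso cloud as a cylindrical range image
    and estimate each layer's circumference via Eq. (17).

    points : (n, 3) torso points, centred on the torso axis.
    z_bins : number of image rows (slice layers).
    Returns (depth, circumference), with depth of shape
    (z_bins, 720) holding radii in the cloud's units.
    """
    pts = np.asarray(points, dtype=float)
    r = np.hypot(pts[:, 0], pts[:, 1])
    theta = np.degrees(np.arctan2(pts[:, 1], pts[:, 0])) % 360.0
    col = np.minimum((theta * 2.0).astype(int), 719)  # 0.5 deg/pixel
    z = pts[:, 2]
    row = ((z - z.min()) / (np.ptp(z) + 1e-9) * z_bins).astype(int)
    row = np.minimum(row, z_bins - 1)
    depth = np.zeros((z_bins, 720))
    # keep only the outermost radius per pixel, as described above
    np.maximum.at(depth, (row, col), r)
    # Eq. (17): each 0.5 deg pixel contributes an arc r_i * pi/360
    circumference = depth.sum(axis=1) * np.pi / 360.0
    return depth, circumference
```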
In this research, body features are roughly classified into feature points and feature lines. Feature points are mostly located on the extremities of the body surface; thus, feature points are defined by their respective geometries on the human body. In turn, feature lines are defined as a group of points which share the same property, such as the zero-crossing points of the Sobel mask. In other words, feature lines can also be defined as the intersection curve of a plane passing through specific features and the scanned surface. The features of a female torso are illustrated on a depth map in Fig. 9.

The proportion of head length to body height is also useful. Because some of the features are located neither at the extremities nor in the form of special geometry, local computation is needed rather than global processing. For example, the armpits are usually located in the region of the second head-length portion, which reduces the time needed for the searching process. Some of the relevant studies found that the ratio is about one-to-eight for Europeans [26], but the head-to-body proportion varies widely across ethnic groups. In this study we set the ratio to one-to-seven for Asians; the ratio may be changed for a customized head-to-body proportion. Fig. 10 shows the frontal contour and proportions of an Asian human subject.

The body feature point search algorithms are based on their mathematical definitions. Unfortunately, there is no unanimity on the definitions of the body features in the literature or in the Standards [20,27–30]. In this paper, we collected the descriptions of body feature points for the garment industry and interpreted them into corresponding logical definitions. Proper searching algorithms were developed, so that the features could be located automatically. These definitions were gathered by consulting experts in the fashion design studio and professionals from the apparel design industry, in order to obtain a reasonable, unique, and feasible standard.
Fig. 8. Body segmentation: (a) point cloud; (b) coded depth map; (c) corresponding circumference curve.
Some selected feature points briefly illustrate our defined standard below. The description is given first, followed by the mathematical model. In the following sections, Sobel(Px) denotes a 1-D Sobel mask applied along the X-coordinate of the depth map, and h denotes one head length. In the examples below, the definitions refer to feature searches on the left side of the torso.

a. Lower front (anterior) neck point: at the forefront of the neck, located at the center of the concavity intermediate between the right and left clavicles.
The target region is located at
$T = \{P \in \mathrm{CenterLine} \mid \max(P_z) - 2h \le P_z \le \max(P_z) - h\}$. (18)
The lower front neck point is given by $\mathrm{Sobel}(P_x) = 0$, as shown in Fig. 3.

b. Side neck point: at the base of the neck, located at the intersection of the cervical and shoulder lines.
To obtain the side neck point, the target region is located at
$T = \{P \in S_f \mid \max(P_z) - 1.5h \le P_z \le \max(P_z) - 0.5h\}$
$\mathrm{SideNeckP} = \mathrm{Max}(Bv(T))$ (19)
where $S_f$ is the outline of the point cloud projected on the YZ plane, as shown in Fig. 10. We apply the bending value method to obtain the maximum bending position, as shown in Fig. 11.

c. Shoulder point (acromion): the most prominent point on the upper edge of the acromial process of the shoulder blade (scapula). In body-dimension tape measurement, it is usually detected by finger pressure.
Since this feature is defined by a skeletal structure that cannot be explored from the skin surface, we define the maximum curvature position on the front-view contour as the acromion. The search area is defined in the region
$T = \{P \in S_f \mid \max(P_z) - 2h \le P_z \le \max(P_z) - h\}$
$\mathrm{Acromion} = \mathrm{Max}(Bv(T))$ (20)
where $S_f$ is the outline of the point cloud projected on the YZ plane.

d. Mid-shoulder point: the mid-point on the shoulder line; it bisects the shoulder line into two segments of equal length.
The mid-shoulder point MidShoulderP is located on the shoulder line such that
$\mathrm{Len}(\mathrm{SideNeckP}, \mathrm{MidShoulderP}) = \mathrm{Len}(\mathrm{MidShoulderP}, \mathrm{Acromion})$ (21)
where Len(A, B) denotes the curve length from point A to point B.

e. Armpit (front and back): the hollow at the joint of the arm and the shoulder.
The bending value method along the X-coordinate is applied on the front and back parts of the second head-length zone. The cusps are searched layer by layer from the acromion downwards until no more cusps are detected and more than one loop appears. A cusp is defined as a bending value exceeding a given threshold:
$\mathrm{cusp} = \{P \mid Bv(P_i) > \mathrm{Threshold}\}$. (22)
The cusps located in the final layer in which cusps appear are set as the front and back armpits.

f. Bust point: the apex of the breast; the most prominent protrusion of the bra cup.
The left bust point is located at
$T = \{P_i \mid 0^\circ \le \theta \le 90^\circ,\ \max(P_z) - 2.5h \le P_z \le \max(P_z) - 1.5h\}$
$\mathrm{BustP} = \{P \mid \mathrm{Sobel}(P_x \in T) = 0\}$. (23)

g. Shoulder blade point: the greatest protrusion on the large, triangular, flat bone situated at the back of the chest (thorax) between the 2nd and 7th ribs. The shoulder blade point is the most prominent protrusion of the upper back area.
The definition is similar to Eq. (23):
$T = \{P_i \mid 90^\circ \le \theta \le 180^\circ,\ \max(P_z) - 2.5h \le P_z \le \max(P_z) - 1.5h\}$
$\mathrm{ShoulderBladeP} = \{P \mid \mathrm{Sobel}(P_x \in T) = 0\}$. (24)

h. Crotch point: the region of the human body where the legs separate from the pelvis.
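To make the pattern of these definitions concrete, here is a hedged sketch of definition (a): the search band of Eq. (18) combined with the Sobel zero-crossing rule, applied to a 1-D radial profile sampled along the front centerline. Representing the centerline as (z, r) samples and taking the first sign change as the zero-crossing are our own assumptions.

```python
import numpy as np

def lower_front_neck_point(z, r, h):
    """Definition (a): search max(z)-2h <= z <= max(z)-h on the
    front centerline, Eq. (18), for the zero-crossing of a 1-D
    Sobel response of the radial profile r(z).

    z, r : equal-length 1-D arrays sampled along the centerline,
           ordered by increasing z (r is the depth-map radius).
    h    : one head length.
    """
    z = np.asarray(z, dtype=float)
    r = np.asarray(r, dtype=float)
    band = (z >= z.max() - 2.0 * h) & (z <= z.max() - h)
    idx = np.flatnonzero(band)
    sob = np.convolve(r, [1.0, 0.0, -1.0], mode="same")  # 1-D Sobel
    s = sob[idx]
    # first sign change marks Sobel = 0, the concavity centre
    flips = np.flatnonzero(np.signbit(s[:-1]) != np.signbit(s[1:]))
    if flips.size == 0:
        return None
    i = idx[flips[0]]
    return z[i], r[i]
```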
In addition, a spatial plane can be defined by three points, $P_1$, $P_2$ and $P_3$, to be a cutting plane. The plane is used to intersect the torso in order to generate a feature line, such as $\mathrm{CuttingPlane}(P_1, P_2, P_3) = 0$. Be aware that the title "girth" below refers to the feature line itself and not to the tape-measurement dimension used in body sizing.

a. Front centerline: the intersection between the sagittal plane and the front portion of the torso.
The front centerline is defined in the region
$T = \{P_i \mid -20^\circ \le \theta \le 20^\circ,\ P_z \le \max(P_z) - 1.5h\}$
$T' = \{P_i \mid \mathrm{Sobel}(P_i \in T) = 0\}$ (27)
$\mathrm{FrontCenterline} = \{P_i \mid P_i \in \mathrm{LinearFitting}(T')\}$. (28)
On a human body the centerline lies, theoretically speaking, in the sagittal plane, but the discrete points detected in Eq. (27) are not coplanar. The ideal sagittal plane is obtained by a linear fitting method on the depth map; therefore, the front centerline is the intersection of the sagittal plane and the torso. LinearFitting(T') represents a linear equation that approximates all the points in T'.

b. Back centerline: the intersection between the sagittal plane and the back portion of the torso.
Referring to Eqs. (27) and (28), we obtain
$\mathrm{BackCenterline} = \{P_i \mid P_i \in \mathrm{LinearFitting}(T')\}$. (29)
The back centerline is the intersection point set between the ideal sagittal plane obtained from Eq. (28) and the back portion of the torso.

c. Upper neck line: the boundary of the head and the neck.
In the torso depth map, the upper neck line can be identified by using the bending value method in the YZ projection plane. In the front portion, the target area is located by
$T = \{P \mid -90^\circ \le \theta \le 90^\circ,\ \max(P_z) - 1.5h \le P_z \le \max(P_z) - 0.5h\}$
$T' = \{P_i \mid \max(Bv(T))\}$ (30)
$\mathrm{UpperNeckLine} = \mathrm{CurveFitting}(T')$. (31)
The upper neck line passes through the base of the skull at the back; nevertheless, that position is usually covered by hair (hollow). The upper neck line is therefore separated into two sections, the front and the back, as shown in the upper line in Fig. 13. We apply a general sine equation to fit the front section (white) in order to connect and generate the back section (red) of the upper neck line. Be aware that the two pieces of the back upper neck line should meet at the same conjunction position.

d. Lower neck line: a smooth line at the neck base that passes through the front neck point, the two side neck points, and the 7th spinal process.
In the front portion of the human neck there is a significant increase of the cross-sectional area from neck to torso. We define the front portion of the lower neck line in
$T = \{P \mid \mathrm{RightSideNeckP} \le \theta \le \mathrm{LeftSideNeckP};\ \mathrm{SideNeckP} \le P_z \le \mathrm{FrontNeckP}\}$
$T' = \{P_i \mid \max(Bv(T))\}$ (32)
$\mathrm{LowerNeckline} = \mathrm{CurveFitting}(T')$. (33)
According to the semantic definition, the back portion of the lower neck line passes through the 7th spinal process. Due to the geometrical smoothness of the back neck region, the lower back neck point is difficult to detect from the standard scanning pose. We invoke curve fitting across the left and right side neck points to develop the lower neck line, as shown in the lower line of Fig. 13.

e. Shoulder line: this connects the side neck point to the shoulder point.
The shoulder line is obtained by
$T = \{P \mid \mathrm{SideNeckP} \le P_y \le \mathrm{Acromion}\}$
$T' = \{P_i \mid \mathrm{Sobel}(P_z) = 0\}$ (34)
$\mathrm{Shoulderline} = \mathrm{CurveFitting}(T')$. (35)

f. Shoulder girth: one of the lateral embracing line loops on the body, passing through the two shoulder points and the front neck point.
We define a cutting plane
$\mathrm{ShoulderGirth} = \mathrm{CuttingPlane}(\mathrm{FrontNeckP}, \mathrm{LeftShoulderP}, \mathrm{RightShoulderP}) \cap \mathrm{Torso}$. (36)

g. Armscye (armhole): the slanted embracing line loop on the body passing through the shoulder point and the two break points.
k. Under bust girth: the lateral embracing line loop on the body just under the breasts.
$T = \{P \mid P \in \mathrm{FrontPrincessline},\ \mathrm{WaistGirth} \le P_z \le \mathrm{BustGirth}\}$
$\mathrm{TurningP} = \mathrm{Max}(Bv(T))$
$\mathrm{UnderBustGirth} = \mathrm{CuttingPlane}(\mathrm{TurningP}, \mathrm{ParallelToXYPlane}) \cap \mathrm{Torso}$. (41)

l. Waist girth: the minimum lateral circumference on the body between the regions of the bust girth and hip girth.
$T = \{P \mid P_z \le \mathrm{BustGirth}\}$
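The cutting-plane girths such as Eqs. (36) and (41) can be sketched by reusing the slab idea of Section 2.4: intersect the cloud with the plane through three feature points, order the resulting ring, and sum the segment lengths. The tolerance and in-plane basis are again our own assumptions.

```python
import numpy as np

def girth_curve(torso, p1, p2, p3, tol=2.0):
    """Cutting-plane girth, cf. Eq. (36): the plane through three
    feature points intersected with the torso cloud.

    Returns (ring, girth): the ordered loop of slab points and the
    closed polygonal length around it.
    """
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)
    normal /= np.linalg.norm(normal)
    pts = np.asarray(torso, dtype=float)
    band = pts[np.abs((pts - p1) @ normal) <= tol]
    u = p2 - p1                       # lies in the cutting plane
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    c = band - band.mean(axis=0)
    ring = band[np.argsort(np.arctan2(c @ v, c @ u))]
    closed = np.vstack([ring, ring[:1]])
    girth = np.linalg.norm(np.diff(closed, axis=0), axis=1).sum()
    return ring, girth
```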
Given the definitions stated in the previous section, we developed a program to extract features on a scanned human body. In this section we describe a few examples of the application of the definitions in feature extraction.

In the case of the centerline, the aim is to determine the intersection between the sagittal plane and the front portion of the torso. We applied a 1-D horizontal Sobel mask on the cylindrical range-coded image; the zero-crossing points of the Sobel mask are the candidate points composing the centerline. During the scanning process, the subject was asked to step on a guided position printed on the platform; therefore, the search for the symmetry plane is limited to within ±20° on the image.
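A sketch of that centerline search follows. It assumes the subject's front maps to column 0 of the range image (with wrap-around at 0°), which is an illustrative convention rather than one stated in the paper.

```python
import numpy as np

def centerline_candidates(depth, deg_per_col=0.5):
    """Zero-crossings of a 1-D horizontal Sobel response inside the
    +/-20 degree window, one candidate centerline point per row.

    depth : (rows, 720) cylindrical range image; column 0 assumed
            to face the subject's front.
    """
    depth = np.asarray(depth, dtype=float)
    half = int(20.0 / deg_per_col)
    cols = np.r_[np.arange(720 - half, 720), np.arange(0, half + 1)]
    window = depth[:, cols]
    sob = window[:, 2:] - window[:, :-2]    # central difference
    out = []
    mid = sob.shape[1] // 2
    for row in range(depth.shape[0]):
        s = sob[row]
        flips = np.flatnonzero(np.signbit(s[:-1]) != np.signbit(s[1:]))
        if flips.size:
            best = flips[np.argmin(np.abs(flips - mid))]
            out.append((row, int(cols[best + 1])))  # nearest 0 deg
    return out
```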
Table 2
Statistics of the subjects
Subject 1 2 3 4 5 Mean SD
Age 47 45 72 31 20 43.0 19.6
Weight (kg) 49 57 52 57 44 51.8 5.5
Stature (cm) 158 158 153 161 150 156.0 4.4
Acromial height, left (mm) 1314.7 1286.2 1298.8 1324.8 1206.8 1286.3 46.8
Acromial height, right (mm) 1313.7 1286.5 1282.5 1324.8 1196.2 1280.7 50.5
Acromial-girth, left (mm) 341.3 405.4 537.7 390.4 335.7 402.1 81.6
Acromial-girth, right (mm) 371.3 419.6 551.9 397.9 322.9 412.7 85.8
Axilla height, left (mm) 1216.3 1161.6 1127.2 1209.2 1108.9 1164.6 47.9
Axilla height, right (mm) 1203.3 1161.4 1115.4 1211.7 1108.1 1160.0 48.1
Cervical height (mm) 1305.0 1341.8 1339.4 1353.4 1258.1 1319.6 38.8
Bust points breadth (mm) 118.0 147.2 124.8 142.2 174.5 141.3 22.1
Chest height (mm) 1155.0 1116.7 1094.3 1163.0 1040.7 1113.9 49.6
Bust girth (mm) 852.3 889.1 944.8 875.4 796.4 871.6 54.1
Under bust height (mm) 1092.3 1060.1 1032.1 1103.1 993.8 1056.3 44.7
Under bust girth (mm) 729.7 746.2 806.6 806.6 642.9 746.4 67.5
Waist height (mm) 1010.0 979.0 958.7 1057.0 844.0 969.7 79.5
Waist girth (mm) 702.7 713.7 760.7 743.7 667.1 717.6 36.5
Hip height (mm) 823.0 764.4 765.1 832.1 732.1 783.4 42.6
Hip girth (mm) 896.0 948.0 944.0 1017.0 811.0 923.2 76.1
Crotch height (mm) 719.7 686.9 694.3 737.6 658.6 699.4 30.5
Table 3
MAD of individual subject
Table 3 lists the MAD values of each subject, and Table 4 lists the heights of key features and the circumference dimensions together with the allowable errors according to ANSUR [31]. All but one of the MADs are less than 5 mm; the circumference of the right armhole, at 7.42 mm, is the only one greater than 5 mm. The coefficient of variation (CV) is also a common measure of consistency, defined as the standard deviation (SD) divided by the mean; in general, a CV below 5% [32] indicates a consistent system. Of the 17 measurements in this study, 16 have a CV below 2%, while the remaining one is below 3%.
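For reference, the consistency statistics reported in Tables 3 and 4 can be computed as below. We read MAD as the mean absolute deviation from the per-feature mean, which is an assumption about the paper's exact formula.

```python
import numpy as np

def consistency_stats(measurements):
    """Per-feature repeatability statistics (cf. Table 4).

    measurements : (scans, features) array; each subject here was
                   scanned three times, so scans == 3.
    Returns (mean, sd, cv_percent, mad).
    """
    m = np.asarray(measurements, dtype=float)
    mean = m.mean(axis=0)
    sd = m.std(axis=0, ddof=1)            # sample SD
    cv = 100.0 * sd / mean                # CV = SD / mean
    mad = np.abs(m - mean).mean(axis=0)   # mean absolute deviation
    return mean, sd, cv, mad
```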
6. Discussion and conclusions

An automatic body feature extraction algorithm based on image processing and computational geometry has been presented. Our method conducts the computations in 2D depth space, which is much more efficient than computing on the original complex 3D point cloud. Moreover, the voids produced by the body scanner can be filled using a simple interpolation method on the depth image, and the noise generated by the body scanner is easy to eliminate using image processing techniques.

Based on the important geometrical definitions of body features, our algorithm is capable of identifying every body feature within minutes. Individuals with extreme body proportions were not included in this study; in such cases, features may not be identifiable by their logical definitions alone. For example, the geometrical features of the skin layers are not manifest in obese adult subjects. In such cases, the common proportions of human beings may be combined into these definitions to assist in locating the possible positions of the features. Fig. 15(b), (d) and (f) illustrate some of the representative outcomes.
We should also note that body data in overlapping regions is not captured in the point cloud; thus converting the scan data into a cylindrical range image is not ambiguous, although neighboring pixels across a "crease" should be treated as two separate patches of skin rather than one continuous surface.

When using this novel feature recognition algorithm, markers stitched on the human body are no longer needed to identify body features with the body scanner. In order to test the reliability and usability of the system, each subject was scanned three times. All MAD values were less than the allowable errors proposed by ANSUR. The MAD values of the height and point-to-point length measurements were similar to the findings in CAESAR [32]. The MAD values of the bust, waist, and hip girth circumferences were significantly better than the allowable errors, although they were greater than those of the height or length measurements. The worst MAD values obtained in this research were for the circumferences of the armholes and the height of the crotch. The reason is that the armpit and crotch regions are obstructed during scanning, so recreating the shapes of the armhole and crotch regions in perfect geometry is almost impossible. Additional local scans of the obstructed regions could improve the outcomes of the evaluation.
Table 4
Feature extraction MAD and CV results
Items ANSUR allowable error (mm) Mean (mm) SD (mm) CV (%) MAD (mm)
Acromial height (left) 7 1286.3 1.7 0.13 1.24
Acromial height (right) 7 1280.7 2.8 0.22 2.04
Acromial-girth (left) 13 402.1 6.6 1.65 4.98
Acromial-girth (right) 13 412.7 10.2 2.47 7.42
Axilla height (left) 10 1164.6 5.0 0.43 3.63
Axilla height (right) 10 1160.0 6.1 0.53 4.62
Cervical height 7 1319.6 1.9 0.15 1.45
Bust points breadth 10 141.3 2.0 1.45 1.49
Chest height 11 1113.9 3.1 0.28 2.31
Bust girth 15 871.6 5.3 0.61 3.82
Under bust height N/Aa 1056.3 2.8 0.26 2.01
Under bust girth 16 746.4 2.5 0.33 1.87
Waist height 11 969.7 2.7 0.27 1.96
Waist girth 11 717.6 4.0 0.56 3.03
Hip height 7 783.4 3.6 0.46 2.65
Hip girth 12 923.2 4.6 0.50 3.33
Crotch height 10 700.4 6.8 0.97 4.96
a The data is not listed in ANSUR.
This research provides an important infrastructure for subsequent development of an automatic anthropometry system integrated with an ordinary body scanner. Based on the successful outcome of body feature identification, we can predict that the previously tedious and labor-intensive collection of national sizing data will become much easier in the future.

7. Future work

The ongoing work for this study may be cataloged into six approaches. Firstly, building up a database of standard models with their representative ages, sexes, and figures. Secondly, applying a similar methodology to build up a database for the human head, arms, and legs, with the aim of constructing a realistic digital human. Thirdly, amending the feature definitions in order to apply the algorithm to scanned men or children. Fourthly, expanding the outcomes of this study to anthropometry automation. Fifthly, developing intuitive customized garment-draping design and manufacturing. Last but not least, drawing up ergonomic standards for tools, facilities, or appliances needed by society.

References

[1] Anthroscan. http://www.human-solutions.com/apparel_industry/anthroscan_en.php; 2005.
[2] Bodyshape. http://www.bodyshapescanners.com/; 2005.
[3] Cyberware. http://www.cyberware.com; 2005.
[4] Gemini. http://www.oes.itri.org.tw/coretech/imaging/img_3di_und_007.html; 2005.
[5] Hamamatsu. http://usa.hamamatsu.com/sys-industrial/blscanner; 2005.
[6] Inspeck. http://www.inspeck.com/; 2005.
[7] TC2. http://www.tc2.com/; 2005.
[8] TriForm. http://www.wwl.co.uk/; 2005.
[9] Vitus. http://www.vitronic.com/; 2005.
[10] Nurre JH. Locating landmarks on human body scan data. In: Proc. of international conference on recent advances in 3-D digital imaging and modeling. 1997. p. 289–95.
[11] Nurre JH, Connor J, Lewark EA, Collier JS. On segmenting the three-dimensional scan data of a human body. IEEE Transactions on Medical Imaging 2000;19(8):787–97.
[12] Wang CL, Chang KK, Yuen MF. From laser-scanner data to feature human model: A system based on fuzzy logic concept. Computer-Aided Design 2003;35(3):241–53.
[13] Ju X, Werghi N, Siebert JP. Automatic segmentation of 3D human body scans. In: Proc. of IASTED international conference on computer graphics and imaging. 2000.
[14] Pargas RP, Staples NJ, Davis JS. Automatic measurement extraction for apparel from a three-dimensional body scan. Optics and Lasers in Engineering 1997;28:157–72.
[15] Robinette K, Daanen H, Paquet E. The CAESAR project: A 3-D surface anthropometry survey. In: Proc. of IEEE 2nd international conference on 3-D digital imaging and modeling. 1999. p. 380–6.
[16] Robinette K, Boehmer M, Burnsides D. 3-D landmark detection and identification in the CAESAR project. In: Proc. 3rd international conference on 3-D digital imaging and modeling. 2001. p. 393–8.
[17] Ashdown S, Loker S. Use of body scan data to design sizing systems based on target markets. America National Textile Center Annual Report; 2001.
[18] Wang CL. Parameterization and parametric design of mannequins. Computer-Aided Design 2005;37(1):83–98.
[19] Simmons KP. Body measurement techniques: A comparison of three-dimensional body scanning and physical anthropometric methods. Ph.D. dissertation. Raleigh (NC): North Carolina State University; 2001.
[20] International Organization for Standardization. Garment construction and anthropometric surveys - body dimensions. Reference no. 8559-1989. Switzerland: ISO; 1989.
[21] Lin CC. A study on the development of a body scanner and processing of the scanned data. Master dissertation. Tainan (Taiwan): National Cheng Kung University; 2003.
[22] Gonzalez RC, Woods RE. Digital image processing. USA: Addison-Wesley; 1992.
[23] Laszlo MJ. Computational geometry and computer graphics in C++. USA: Prentice Hall; 1996.
[24] Anand VB. Computer graphics and geometric modeling for engineers. USA: John Wiley & Sons; 1993.
[25] Wang MJ, Wu WY, Huang LK, Wang DM. Corner detection using bending value. Pattern Recognition Letters 1995;16:575–83.
[26] Ratner P. 3-D human modeling and animation. 2nd ed. New York: Wiley; 2003. p. 55–7.
[27] Cooklin G. Pattern grading for women's clothes: The technology of sizing. Oxford: BSP Professional Books; 1990.
[28] Seitz T, Balzulat J, Bubb H. Anthropometry and measurement of posture and motion. International Journal of Industrial Ergonomics 2000;25:447–53.
[29] Solinger J. Apparel manufacturing handbook: Analysis, principles, practice. 2nd ed. Columbia (SC): Bobbin Media Corp; 1988.
[30] Taylor PJ, Shoben MM. Grading for the fashion industry: The theory and practice. Cheltenham: LCFS Fashion Media; 1990.
[31] Gordon CC, Bradtmiller B, Clausen CE, Churchill T, McConville JT, Tebbetts I, et al. 1987–1988 anthropometric survey of US Army personnel: Methods and summary statistics. Natick/TR-89-044. Natick (MA): US Army Natick Research, Development and Engineering Center; 1989.
[32] Robinette KM, Daanen HAM. Precision of the CAESAR scan-extracted measurements. Applied Ergonomics 2006;37(3):259–65.

Iat-Fai Leong is a Ph.D. student at National Cheng Kung University, Taiwan. He graduated in 1998 with a BS degree in Mechanical Engineering and received an M.Sc. degree in 2000, both at National Cheng Kung University. His research interests are in the areas of computer graphics and computer-aided geometric design.

Jing-Jing Fang is an associate professor in the Department of Mechanical Engineering at National Cheng Kung University, Taiwan. She leads her research team working in the areas of digital mannequins, 3D garment design, pattern generation, image-based surgical planning, and surgical navigation. Her research interests are geometric modeling, object-oriented design, and virtual reality applications. She received her BS and M.Sc. in applied mathematics in Taiwan in 1984, and her Ph.D. in mechanical and chemical engineering from Heriot-Watt University, Britain, in 1996.

Ming-June Tsai is a professor in the Department of Mechanical Engineering at National Cheng Kung University, Taiwan. He received his Ph.D. in Mechanical Engineering from Ohio State University in 1986. His research interests are robotics and automation, image processing and feature recognition, design of optical inspection systems, and geometrical reverse engineering systems.