Computer-Aided Design 39 (2007) 568–582
www.elsevier.com/locate/cad
doi:10.1016/j.cad.2007.03.003

Automatic body feature extraction from a marker-less scanned human body

Iat-Fai Leong*, Jing-Jing Fang, Ming-June Tsai
Department of Mechanical Engineering, National Cheng Kung University, 1 University Road, Tainan 701, Taiwan

Received 1 August 2006; accepted 10 March 2007

* Corresponding author. Tel.: +886 922815240; fax: +886 62081181. E-mail address: n1889139@ccmail.ncku.edu.tw (I.-F. Leong).

Abstract

In this paper, we propose a novel method of body feature extraction from a marker-less scanned body. The descriptions of human body features, mostly defined in ASTM (1999) and ISO (1989), are interpreted into logical mathematical definitions. Using these definitions, we employ image processing and computational geometry techniques to identify body features automatically from the torso point cloud. We currently extract 21 feature points and 35 feature lines on the human torso; this number may be extended if necessary. Moreover, body feature extraction takes less than 2 min of processing time, starting from the raw point cloud. The algorithm was successfully tested on several Asian female adults aged from 18 to 60.
© 2007 Elsevier Ltd. All rights reserved.

Keywords: Feature identification; Body scanner; Computational geometry

1. Introduction

Anthropometry is an important issue in the field of human-factor-related production. A precise sizing system provides a useful foundation for the manufacture of daily commodities. Among these, the design of human apparel, such as clothes, glasses, hats and footwear, is most important, needing precise civilian anthropometric data. Body dimensions are usually tape-measured by tailors. The accuracy of measurement is affected by the expertise of the operators and the cooperation of the person being measured, and conducting nationwide large-scale anthropometry is time consuming and tedious. There is therefore a great need for an automatic anthropometry system. Recently, the 3D body scanner has become a notable tool for anthropometry; Anthroscan [1], BodyShape [2], Cyberware [3], Gemini [4], Hamamatsu [5], Inspeck [6], TC2 [7], TriForm [8] and Vitus [9] are some examples. However, the raw data taken from such 3D scanners cannot be readily used by industry, because the scanned points contain little meaningful information. The key steps in solving this problem are data extraction, feature identification, and data symbolization. Therefore, much research effort has been put into the post-processing of the raw data. Nurre separated the data into six regions by finding cusps on every slice of discrete points [10,11]. Ju et al. segmented the body into five parts and determined the feature points layer by layer by computing their circumferences [13]. Wang et al. used fuzzy logic to recognize features in unorganized cloud points [12]. Pargas used the 3D scanner for measurement of body dimensions in the garment industry [14]. In the famous CAESAR project executed in Europe, Robinette et al. used a 3D body scanner to collect the dimensions of the body [15]. Prior to scanning, reflective markers were attached to the anatomical landmarks of the subject, and the positions of the landmarks were then determined semi-automatically; a neural net was used to identify the positions of the landmarks by means of feature points [16]. Ashdown constructed a sizing system using the data from a 3D body scanner [17]. Wang developed an algorithm to extract key features on the human body, then built a set of parametric surfaces to represent the scanned subject [18]. Simmons compared different 3D scanners and concluded that a standard feature terminology and common feature recognition software are needed for the various scanners [19]. Turning the scanned data into useful information for design purposes is still a long way off.

In this paper, we propose a novel method for body feature extraction from the scanned human body. The semantic definitions of body features found in ISO 8559 were interpreted into a series of mathematical definitions [20]. A total of 21 feature points and 35 feature lines on the human torso were identified. Each feature stands for an important landmark for garment making or the ergonomics industry. Contrary to previous studies, markers were not needed in this study: the features were identified by geometric properties and common proportions, thus eliminating the variations due to different operators. The overall goal of this research is to generate, automatically, a digital mannequin from the scanned body. With the anatomical features embedded in the digital mannequin, an automatic anthropometry system can be applied directly in the garment design industry, for example, as well as in ergonomic applications. Fig. 1 shows the workflow for digital mannequin generation. First, a subject is scanned by a full-body scanner. The generated point cloud is aligned along its principal axes and segmented into five major parts: two arms, two legs, and a torso-and-head segment [21]. Then the torso is encoded into a range map, in order to eliminate noise and fill the voids inside it. Geometric features are easily discovered in the encoded image. Finally, a parametric triangular tessellation is generated which retains every feature. This paper focuses on feature extraction as a base for the next stages of tessellation and automatic anthropometry.

Fig. 1. Flowchart for the automatic anthropometry system.

This paper is organized as follows. In Section 2, a detailed description is given of the techniques used for feature identification. In Section 3, we describe the methods for data alignment, segmentation and coded image representation. Next, we describe the definitions and search procedure for each feature on the torso. Finally, we discuss the results of our approach.
2. Mathematical theory

The feature extraction system in this study is based on the descriptions of feature points and feature lines in the garment design literature, handbooks, and standards (ASTM and ISO). The definitions are interpreted into logical mathematical definitions, supplemented with reasonable proportions in case those definitions are not directly applicable, and finally coded into a computer algorithm. In this way, the body features can be found correctly and uniquely, without ambiguity. The theory and methodology for feature identification are primarily based on image processing techniques [22], computational geometry [23] and computer graphics [24]. The methods involved in this study are briefly described below.

2.1. Image processing

Finding the grey-level gradient of a specific pixel on the depth map is the approach used most often. For a given image function f(x, y), the gradient of the grey level at (x, y) is defined as:

∇f(x, y) = (∂f/∂x) i + (∂f/∂y) j.  (1)

The gradient vector ∇f points in the direction of the maximum variation of grey level, and its magnitude is

mag[∇f] = √((∂f/∂x)² + (∂f/∂y)²).  (2)

This is the maximum variation per unit length of f(x, y) in the direction of ∇f. Eq. (2) can be rewritten in the finite difference form

mag[∇f] ≅ ((f(x, y) − f(x+1, y))² + (f(x, y) − f(x, y+1))²)^(1/2).  (3)

In order to minimize computational cost, the magnitude of ∇f is simplified as

mag[∇f] ≈ |f(x, y) − f(x+1, y)| + |f(x, y) − f(x, y+1)|.  (4)

This formula can be used to create a 2 × 2 mask for pixel computation. Many other masks used to enhance the boundaries of a digital image can be derived from this basic mask.
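As a minimal illustration of Eq. (4), the following C++ sketch (C++ being the language the system of Section 5 was written in) computes the simplified gradient magnitude over a depth map; the row-major container layout and 16-bit grey type are our assumptions, not the paper's actual data structures:

#include <cstdint>
#include <cstdlib>
#include <vector>

// Simplified gradient magnitude of Eq. (4):
//   mag = |f(x,y) - f(x+1,y)| + |f(x,y) - f(x,y+1)|
// The depth map is assumed to be row-major, one 16-bit grey value per pixel.
std::vector<int> gradientMagnitude(const std::vector<uint16_t>& f,
                                   int width, int height)
{
    std::vector<int> mag(static_cast<size_t>(width) * height, 0);
    for (int y = 0; y + 1 < height; ++y) {
        for (int x = 0; x + 1 < width; ++x) {
            int c  = f[y * width + x];
            int dx = std::abs(c - f[y * width + (x + 1)]);  // horizontal difference
            int dy = std::abs(c - f[(y + 1) * width + x]);  // vertical difference
            mag[y * width + x] = dx + dy;                   // Eq. (4), no square root
        }
    }
    return mag;
}

Avoiding the square root of Eq. (2) is exactly the computational saving the text refers to.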
By using the gradient theorem, the x-component of the gradient vector is

mag[∇f](x) = (f(x−1, y−1) + 2f(x, y−1) + f(x+1, y−1)) − (f(x−1, y+1) + 2f(x, y+1) + f(x+1, y+1)),  (5)

and the y-component is

mag[∇f](y) = (f(x+1, y−1) + 2f(x+1, y) + f(x+1, y+1)) − (f(x−1, y−1) + 2f(x−1, y) + f(x−1, y+1)).  (6)

Laplace and Sobel masks were used for detecting features, filtering noise and determining curve properties. Via these mask convolutions, the level index of a pixel indicates the variation in depth between the pixel and its surrounding pixels.
On the encoded torso range image, the Roberts mask is used to eliminate isolated noise points, and the Sobel mask is used to find gradients on the body surface, such as the centerline (line of symmetry) of the body.

2.1.1. Sobel mask

The Sobel mask is commonly used for edge detection in digital images. In this work, the 1-D Sobel mask (see Fig. 2) is used to obtain the first-order derivative, which is then fitted to a curve of order 3 or less. Body feature points are located by applying the transformed masks to obtain the extreme convex, concave, or inflection points. Body feature lines are usually revealed by applying a given mask to the depth image. Therefore, the one-dimensional mask is of great use in the body feature search algorithm.

Fig. 2. The 1-D Sobel mask.

For example, the arrows in Fig. 3 point to the local extremities on the centerline of the head to which the 1-D Sobel mask has been applied. Those local inflection points represent some of the feature points on the front centerline of the human head. Many body feature points, including the front neck point, can be revealed by this method.

Fig. 3. The Sobel curve along the x-coordinates corresponding to the front centerline of the head.
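To make the zero-crossing idea concrete, here is a hedged C++ sketch that applies a simple 1-D derivative kernel along one row of the depth map and collects the sign changes of the response; the [-1, 0, +1] kernel is an assumption, since the exact coefficients of the paper's 1-D Sobel mask are only given graphically in Fig. 2:

#include <cstdint>
#include <vector>

// 1-D derivative response along one image row, followed by a scan for zero
// crossings (sign changes). The zero crossings mark local depth extrema, i.e.
// candidate feature points such as the front neck point.
std::vector<int> zeroCrossings(const std::vector<uint16_t>& row)
{
    std::vector<int> response(row.size(), 0);
    for (size_t i = 1; i + 1 < row.size(); ++i)
        response[i] = static_cast<int>(row[i + 1]) - static_cast<int>(row[i - 1]);

    std::vector<int> crossings;
    for (size_t i = 1; i + 1 < row.size(); ++i)
        if (response[i] * response[i + 1] < 0)   // sign change between neighbours
            crossings.push_back(static_cast<int>(i));
    return crossings;
}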
2.1.2. Laplace mask

The Laplace mask is primarily used for representing the differential properties of a subject. The operator is

∇²f = ∂²f/∂x² + ∂²f/∂y².  (7)

∇²f denotes the second derivative of f; it represents the slope variation at (x, y), i.e. its curvature. The Laplace mask is usually used for finding the edges of a specific subject in an image. Eq. (7) can be rewritten in finite difference form as follows:

∂f/∂x ≈ Δf/Δx ≈ Δx f(x, y)  (8)

Δ⁺x f(x, y) = f(x+1, y) − f(x, y),  Δ⁻x f(x, y) = f(x, y) − f(x−1, y)  (9)

Δ²x f(x, y) = (Δ⁺x f(x, y) − Δ⁻x f(x, y)) / Δx.  (10)

Since Δx = 1,

Δ²x f(x, y) = Δ⁺x f(x, y) − Δ⁻x f(x, y) = f(x+1, y) + f(x−1, y) − 2f(x, y).  (11)

Similarly,

Δ²y f(x, y) = Δ⁺y f(x, y) − Δ⁻y f(x, y) = f(x, y+1) + f(x, y−1) − 2f(x, y).  (12)

According to Eqs. (11) and (12), the Laplace mask used in this study is designed as shown in Fig. 4.

Fig. 4. The 1-D Laplace mask.

The Laplace mask is rather sensitive to noise points, whose depths usually differ sharply from those of their surrounding pixels. Noise filtering using the Laplace mask is no different from other filtering methods applied to Euclidean spatial points; however, because the procedure in this study is conducted in 2D image space, it is simpler and faster than filtering in 3D space. Moreover, the Laplace mask is also invoked to disclose the concave and convex feature points in this study.
2.2. Bending value method

The bending value method is used to locate the possible cusps of a curve [25]. Referring to Fig. 5, the bending value of Pi(xi, yi) involves its kth preceding point Pi−k and its kth succeeding point Pi+k. The vectors PiPi+k = (Xfd(i), Yfd(i)) and PiPi−k = (Xbd(i), Ybd(i)) denote the forward and backward differences of Pi respectively, in which
Xfd(i) = xi+k − xi,    Yfd(i) = yi+k − yi
Xbd(i) = xi−k − xi,    Ybd(i) = yi−k − yi.  (13)

Fig. 5. Bending of the point Pi.

Referring to Fig. 5, the vector sum is

PiFk = PiPi+k + PiPi−k = (Xfd(i) + Xbd(i), Yfd(i) + Ybd(i)).  (14)

Let

Xc = |Xfd(i) + Xbd(i)| and Yc = |Yfd(i) + Ybd(i)|.  (15)

The bending value of point Pi is then

Bv(i) = √(Xc(i)² + Yc(i)²).  (16)

Be aware that no bending values exist for the first and last k/2 points if, and only if, the subject curve is not a closed loop.
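Eqs. (13)-(16) translate almost directly into code. A minimal sketch, assuming a simple 2D point type and an open polyline:

#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Bending value of Eqs. (13)-(16): the length of the sum of the forward and
// backward difference vectors taken k points ahead of and behind P_i. Nearly
// straight spans give values close to zero; cusps give large values. The
// caller must keep k <= i and i + k < p.size(), since (as noted above) the
// ends of an open curve have no bending value.
double bendingValue(const std::vector<Pt>& p, size_t i, size_t k)
{
    double xfd = p[i + k].x - p[i].x, yfd = p[i + k].y - p[i].y;  // forward, Eq. (13)
    double xbd = p[i - k].x - p[i].x, ybd = p[i - k].y - p[i].y;  // backward
    double xc  = std::fabs(xfd + xbd);                            // Eq. (15)
    double yc  = std::fabs(yfd + ybd);
    return std::sqrt(xc * xc + yc * yc);                          // Eq. (16)
}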
2.3. Curve fitting and interpolation

A body feature curve that passes through some feature points should retain its original geometric properties. After applying certain mask operators, a series of local points can be obtained. In some circumstances, however, the distribution of these points looks not curved but zigzagged. In such circumstances, a low-order B-spline is used to approximate these points in order to generate a smoother curve.

2.4. Intersection curve

The surface of the human body is rather smooth and continuous, so a plane passing through feature points will cut the body in a smooth curve. However, data points obtained from 3D scanners are discrete in nature: a plane passing through some points of the data set will generally meet neither another point nor a curve. If the point cloud is dense enough, though, collecting the points within a given distance of the plane will "form" a sectional curve. Fig. 6 shows the procedure for obtaining the sectional curve from a data set. The points are distributed on both sides of the plane. Sorting the points by their polar angles, we can "stitch" the sequence of points together; the stitch lines intersect the plane at the "sectional curve".

Fig. 6. Method of obtaining a sectional curve.
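A sketch of this stitching scheme follows: candidate points near the cutting plane are sorted by polar angle about the body's vertical axis, and every consecutive pair that straddles the plane is linearly interpolated onto it. The angular sort about the Z axis is our assumption; it holds for the roughly cylindrical torso sections used in this paper:

#include <algorithm>
#include <cmath>
#include <vector>

struct P3 { double x, y, z; };
struct Plane { P3 n; double d; };   // points p with n.p = d, n a unit normal

static double signedDist(const Plane& pl, const P3& p)
{
    return pl.n.x * p.x + pl.n.y * p.y + pl.n.z * p.z - pl.d;
}

// Every consecutive pair of angle-sorted points whose signed distances have
// opposite signs is a "stitch" crossing the plane; linear interpolation puts
// the crossing exactly on the plane, yielding the sectional curve of Fig. 6.
std::vector<P3> sectionalCurve(std::vector<P3> pts, const Plane& pl)
{
    std::sort(pts.begin(), pts.end(), [](const P3& a, const P3& b) {
        return std::atan2(a.y, a.x) < std::atan2(b.y, b.x);
    });
    std::vector<P3> curve;
    for (size_t i = 0; i < pts.size(); ++i) {
        const P3& a = pts[i];
        const P3& b = pts[(i + 1) % pts.size()];   // close the loop of stitches
        double da = signedDist(pl, a), db = signedDist(pl, b);
        if (da * db < 0.0) {
            double t = da / (da - db);             // interpolation factor along ab
            curve.push_back({ a.x + t * (b.x - a.x),
                              a.y + t * (b.y - a.y),
                              a.z + t * (b.z - a.z) });
        }
    }
    return curve;
}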
3. Pre-processing

The 3D data in its original format not only contains no geometric features, but also occupies a great deal of memory, and manipulating the data in 3D space is computationally difficult and time consuming. The purpose of post-processing the raw scanned data is to sort the data into a more meaningful format, so that it can be conveniently used for feature recognition.

During full-body scanning, subjects are asked to keep their arms and legs slightly separated. Although footprints inside the scanner guide the subjects as to where to face, the scanner and body are not sufficiently aligned. Therefore, the point cloud is rectified according to the principal axes of its tensor of inertia. The X, Y and Z axes correspond to body thickness, width, and height, respectively, as shown in Fig. 7.

In order to simplify the subsequent feature extraction algorithm, the point cloud is sliced into many layers along the Z axis. On each layer, the scanned points may form one or more loops. The slices are then segmented into torso, arms and legs using a body segmentation scheme [21]. Fig. 8(a) shows the result after the segmentation process.

In order to take advantage of image processing techniques and avoid the complexity of sculptured-surface reconstruction, the point cloud is encoded into a 2D depth map. Observing the shape of a human torso, we found that it resembles a cylinder, so the points are transformed into cylindrical coordinates (r, θ, z). The θ and z components are used as image coordinates on the depth image, and the distance component r is mapped into a 16-bit grey intensity. The size of the image is set to 720 × (torso height). Noise in the scan data is filtered by the Laplace mask, and the voids are then filled. Fig. 8(b) shows the coded image of the torso.
Fig. 7. Principal axes alignment.

During the conversion process, if more than one data point falls in the same pixel, only the point with the highest intensity, i.e. the point furthest from the central axis, is recorded in the image. The reason for retaining the outermost points is to avoid ambiguity where portions of the torso fold over, for instance when the subject is rather fat.

Fig. 8. Body segmentation: (a) point cloud; (b) coded depth map; (c) corresponding circumference curve.

By assuming each pixel on the body image to be a circular segment, the circumference of each horizontal layer is given by:

Circumference = Σ_{i=0}^{720} r_i · π/360.  (17)

Fig. 8(c) shows the circumference of each layer. The trend of the circumference across the layers is useful information for locating body features.
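Eq. (17) treats each of the 720 pixels in a row as a 0.5° circular arc, so a layer's circumference is just the sum of the arc lengths rᵢ·π/360. A minimal sketch, reusing the row-major image layout assumed earlier:

#include <cstdint>
#include <vector>

// Circumference of one horizontal layer per Eq. (17): each of the 720 pixels
// spans 0.5 degrees = pi/360 radians, so its arc length is r_i * pi / 360.
double layerCircumference(const std::vector<uint16_t>& img, int row)
{
    const int cols = 720;
    const double kPi = 3.14159265358979323846;
    double sum = 0.0;
    for (int c = 0; c < cols; ++c)
        sum += img[static_cast<size_t>(row) * cols + c] * kPi / 360.0;
    return sum;   // same unit as the decoded radius values
}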

4. Feature identification 4.1. Feature points

In this research, body features are roughly classified into The body feature point search algorithms are based on their
feature points and feature lines. Feature points are mostly mathematic definitions. Unfortunately, there is no unanimity
located on the extremities of the body surface. Thus, feature on the definitions of the body features in the literature, or
points are defined by their respective geometries on the human in the Standards [20,27–30]. In this paper, we collected the
body. In turn, feature lines are defined as a group of points descriptions of body feature points for the garment industry,
which contain the same properties, such as zero-crossing points and interpreted them into corresponding logical definitions.
from the Sobel mask. In other words, feature lines can also Proper searching algorithms were developed, so that the
be defined as the intersection curve of a plane passing through features could be located automatically. These definitions were
These definitions were gathered by consulting experts in the fashion design studio and professionals from the apparel design industry, in order to obtain a reasonable, unique, and feasible standard. Some selected feature points are used below to illustrate our defined standard: the description is given first, followed by the mathematical model. In the following, Sobel(Px) denotes a 1-D Sobel mask applied along the X-coordinate of the depth map, and h denotes one head length. In the examples below, the definitions refer to the search for features on the left side of the torso.

Fig. 9. Illustration of features on the depth map.

a. Lower front (anterior) neck point: at the forefront of the neck, located at the center of the concavity intermediate between the right and left clavicles.
The target region is located at
T = {P ∈ CenterLine | max(Pz) − 2h ≤ Pz ≤ max(Pz) − h}.  (18)
The lower front neck point is given by Sobel(Px) = 0, as shown in Fig. 3.

b. Side neck point: at the base of the neck, located at the intersection of the cervical and shoulder lines.
To obtain the side neck point, the target region is located at
T = {P ∈ Sf | max(Pz) − 1.5h ≤ Pz ≤ max(Pz) − 0.5h}
SideNeckP = Max(Bv(T))  (19)
where Sf is the outline of the point cloud projected on the YZ plane, as shown in Fig. 10. We apply the bending value method to obtain the position of maximum bending, as shown in Fig. 11.

c. Shoulder point (acromion): the most prominent point on the upper edge of the acromial process of the shoulder blade (scapula). In tape-measurement of body dimensions, it is usually detected by finger pressure.
Since this feature is defined by a skeletal structure which cannot be observed from the skin surface, we define the position of maximum curvature on the front-view contour as the acromion. The search area is defined in the region
T = {P ∈ Sf | max(Pz) − 2h ≤ Pz ≤ max(Pz) − h}
Acromion = Max(Bv(T))  (20)
where Sf is again the outline of the point cloud projected on the YZ plane.

d. Mid-shoulder point: the mid-point on the shoulder line; the point bisects the shoulder line into two segments of equal length.
The mid-shoulder point MidShoulderP is located on the shoulder line such that
Len(SideNeckP, MidShoulderP) = Len(MidShoulderP, Acromion)  (21)
where Len(A, B) denotes the curve length from point A to point B.
e. Armpit (front and back): the hollow at the joint of the arm and the shoulder.
The bending value method along the X-coordinate is applied to the front and back parts of the second head-length zone. The cusps are searched for layer by layer, from the acromion downwards, until no cusp is detected and more than one loop appears. A cusp is defined as a point whose bending value exceeds a given threshold:
cusp = {P | Bv(Pi) > Threshold}.  (22)
The cusps located in the final layer in which cusps appear are set as the front and back armpits; a sketch of this search follows.
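The sketch below reuses the bending value of Eqs. (13)-(16) and keeps the cusps of the last layer in which any appear; the multiple-loop test mentioned above is omitted for brevity, and k and the threshold are assumed tuning values the paper does not specify:

#include <cmath>
#include <vector>

struct C2 { double x, y; };

// Bending value as in Eqs. (13)-(16) (same computation as the Section 2.2 sketch).
static double bv(const std::vector<C2>& p, size_t i, size_t k)
{
    double xc = std::fabs((p[i + k].x - p[i].x) + (p[i - k].x - p[i].x));
    double yc = std::fabs((p[i + k].y - p[i].y) + (p[i - k].y - p[i].y));
    return std::sqrt(xc * xc + yc * yc);
}

// Layer-by-layer cusp search of Eq. (22): the layers are ordered from the
// acromion downwards; points whose bending value exceeds the threshold are
// cusps, and the cusps of the last layer that still contains any are
// returned as armpit candidates.
std::vector<C2> lastLayerCusps(const std::vector<std::vector<C2>>& layers,
                               size_t k, double threshold)
{
    std::vector<C2> last;
    for (const auto& layer : layers) {
        std::vector<C2> cusps;
        for (size_t i = k; i + k < layer.size(); ++i)
            if (bv(layer, i, k) > threshold)
                cusps.push_back(layer[i]);
        if (cusps.empty()) break;        // cusps vanished: stop the descent
        last = cusps;
    }
    return last;
}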
f. Bust point: the apex of the breast; the most prominent protrusion of the bra cup.
The left bust point is located at
T = {Pi | 0° ≤ θ ≤ 90°, max(Pz) − 2.5h ≤ Pz ≤ max(Pz) − 1.5h}
BustP = {P | Sobel(Px ∈ T) = 0}.  (23)

g. Shoulder blade point: the greatest protrusion on the large, triangular, flat bone situated at the back of the chest (thorax) between the 2nd and 7th ribs. The shoulder blade point is the most prominent protrusion of the upper back area.
The definition is similar to Eq. (23):
T = {Pi | 90° ≤ θ ≤ 180°, max(Pz) − 2.5h ≤ Pz ≤ max(Pz) − 1.5h}
ShoulderBladeP = {P | Sobel(Px ∈ T) = 0}.  (24)
Fig. 10. Frontal contour and head-body ratio.

Fig. 11. Side neck points.

h. Crotch point: the region of the human body where the legs separate from the pelvis.
We define the crotch point as lying at the lowest position of the centerlines. Due to obstruction of the optical detection, voids usually exist in the crotch area. Therefore, the crotch point is generated from the partial front and back centerlines by quadratic curve fitting:
Z = aX² + bX + c.  (25)
The fitted curve smoothly connects the front and back centerlines; Fig. 12 demonstrates the outcome of the fitting. The crotch point is then given by
CrotchP = {P | ∂Pz/∂Px = 0}.  (26)

Fig. 12. Body centerline and crotch point.
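A sketch of the quadratic fit of Eq. (25) by least squares, with the crotch X taken at the vertex where ∂Z/∂X = 0 as in Eq. (26); solving the 3 × 3 normal equations by Cramer's rule is our choice, not necessarily the authors':

#include <array>
#include <vector>

struct Quad { double a, b, c; };   // Z = aX^2 + bX + c, Eq. (25)

// Least-squares fit of Eq. (25) through the sampled lower ends of the front
// and back centerlines. At least three distinct X values are assumed.
Quad fitQuadratic(const std::vector<double>& X, const std::vector<double>& Z)
{
    double s0 = static_cast<double>(X.size());
    double s1 = 0, s2 = 0, s3 = 0, s4 = 0, t0 = 0, t1 = 0, t2 = 0;
    for (size_t i = 0; i < X.size(); ++i) {
        double x = X[i], x2 = x * x;
        s1 += x;  s2 += x2;  s3 += x2 * x;  s4 += x2 * x2;
        t0 += Z[i];  t1 += x * Z[i];  t2 += x2 * Z[i];
    }
    auto det3 = [](std::array<double, 9> m) {
        return m[0] * (m[4] * m[8] - m[5] * m[7])
             - m[1] * (m[3] * m[8] - m[5] * m[6])
             + m[2] * (m[3] * m[7] - m[4] * m[6]);
    };
    double D  = det3({ s4, s3, s2,  s3, s2, s1,  s2, s1, s0 });
    double Da = det3({ t2, s3, s2,  t1, s2, s1,  t0, s1, s0 });
    double Db = det3({ s4, t2, s2,  s3, t1, s1,  s2, t0, s0 });
    double Dc = det3({ s4, s3, t2,  s3, s2, t1,  s2, s1, t0 });
    return { Da / D, Db / D, Dc / D };
}

// The crotch X per Eq. (26): the vertex of the parabola, where dZ/dX = 0.
double crotchX(const Quad& q) { return -q.b / (2.0 * q.a); }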

4.2. Feature lines

In practice, we find that there are many geodesic lines passing through certain feature points on a mannequin, marked as handy guides for garment making. They are mostly used for measuring the dimensions of cloth needed to cover the body. However, there are virtually infinite ways of passing a line through given points, and the body feature lines have not been defined clearly in the literature. In this study, reasonable inferences are made for those feature lines whose definitions are ambiguous. Some of the feature lines described below are approximated by 4th-order B-spline fitting using a point set T, represented as CurveFitting(T).
In addition, a spatial plane defined by three points, P1, P2 and P3, serves as a cutting plane. Such a plane is intersected with the torso in order to generate a feature line, denoted CuttingPlane(P1, P2, P3) ∩ Torso. Be aware that the term "girth" below refers to the feature line itself and not to the tape-measured dimension used in body sizing.

a. Front centerline: the intersection between the sagittal plane and the front portion of the torso.
The front centerline is defined in the region
T = {Pi | −20° ≤ θ ≤ 20°, Pz ≤ max(Pz) − 1.5h}
T′ = {Pi | Sobel(Pi ∈ T) = 0}  (27)
FrontCenterline = {Pi | Pi ∈ LinearFitting(T′)}.  (28)
On a human body the centerline is, theoretically speaking, in the sagittal plane, but the discrete points detected in Eq. (27) are not coplanar. The ideal sagittal plane is therefore obtained by a linear fitting method on the depth map, and the front centerline is the intersection of the sagittal plane and the torso. LinearFitting(T′) represents a linear equation that approximates all the points in T′.

b. Back centerline: the intersection between the sagittal plane and the back portion of the torso.
Referring to Eq. (27), we obtain
BackCenterline = {Pi | Pi ∈ LinearFitting(T′)}.  (29)
The back centerline is the intersection point set between the ideal sagittal plane obtained from Eq. (27) and the back portion of the torso.

c. Upper neck line: the boundary between the head and the neck.
In the torso depth map, the upper neck line can be identified by using the bending value method in the YZ projection plane. In the front portion, the target area is located by
T = {P | −90° ≤ θ ≤ 90°, max(Pz) − 1.5h ≤ Pz ≤ max(Pz) − 0.5h}
T′ = {Pi | max(Bv(T))}  (30)
UpperNeckLine = CurveFitting(T′).  (31)
The upper neck line passes through the base of the skull at the back; nevertheless, that position is usually covered by hair (a hollow). The upper neck line is therefore separated into two sections, the front and the back, as shown by the upper line in Fig. 13. We apply a general sine equation to fit the front section (white) in order to connect to and generate the back section (red) of the upper neck line. Be aware that the two pieces of the back upper neck line must meet at the same conjunction positions.

Fig. 13. Upper neck line and lower neck line.

d. Lower neck line: a smooth line at the neck base that passes through the front neck point, the two side neck points, and the 7th spinal process.
In the front portion of the human neck, there is a significant increase in cross-sectional area from neck to torso. We define the front portion of the lower neck line in
T = {P | RightSideNeckP ≤ θ ≤ LeftSideNeckP; SideNeckP ≤ Pz ≤ FrontNeckP}
T′ = {Pi | max(Bv(T))}  (32)
LowerNeckline = CurveFitting(T′).  (33)
According to the semantic definition, the back portion of the lower neck line passes through the 7th spinal process. Due to the geometrical smoothness of the back neck region, the lower back neck point is difficult to detect from the standard scanning pose. We therefore invoke curve fitting across the left and right side neck points to develop the lower neck line, as shown by the lower line in Fig. 13.

e. Shoulder line: connects the side neck point to the shoulder point.
The shoulder line is obtained by
T = {P | SideNeckP ≤ Py ≤ Acromion}
T′ = {Pi | Sobel(Pz) = 0}  (34)
Shoulderline = CurveFitting(T′).  (35)

f. Shoulder girth: one of the lateral embracing line loops on the body, passing through the two shoulder points and the front neck point.
We define a cutting plane:
ShoulderGirth = CuttingPlane(FrontNeckP, LeftShoulderP, RightShoulderP) ∩ Torso.  (36)
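To make CuttingPlane(P1, P2, P3) concrete: the plane through three feature points has as its normal the cross product of two edge vectors, after which the sectional-curve routine sketched in Section 2.4 can collect the girth loop. A hedged sketch, including a perimeter sum as one plausible way to read off a girth dimension (recalling that "girth" here names the feature line itself):

#include <cmath>
#include <vector>

struct W3 { double x, y, z; };
struct PlaneEq { W3 n; double d; };   // points p with n.p = d

// Plane through three feature points, e.g. CuttingPlane(FrontNeckP,
// LeftShoulderP, RightShoulderP) of Eq. (36): the normal is the cross
// product of two edge vectors, normalised to unit length.
PlaneEq cuttingPlane(const W3& p1, const W3& p2, const W3& p3)
{
    W3 u = { p2.x - p1.x, p2.y - p1.y, p2.z - p1.z };
    W3 v = { p3.x - p1.x, p3.y - p1.y, p3.z - p1.z };
    W3 n = { u.y * v.z - u.z * v.y,
             u.z * v.x - u.x * v.z,
             u.x * v.y - u.y * v.x };
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    n = { n.x / len, n.y / len, n.z / len };
    return { n, n.x * p1.x + n.y * p1.y + n.z * p1.z };
}

// Perimeter of the closed girth loop produced by intersecting such a plane
// with the torso (e.g. via the sectional-curve routine of Section 2.4).
double girthLength(const std::vector<W3>& loop)
{
    double sum = 0.0;
    for (size_t i = 0; i < loop.size(); ++i) {
        const W3& a = loop[i];
        const W3& b = loop[(i + 1) % loop.size()];
        sum += std::sqrt((a.x - b.x) * (a.x - b.x) +
                         (a.y - b.y) * (a.y - b.y) +
                         (a.z - b.z) * (a.z - b.z));
    }
    return sum;
}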
g. Armscye (armhole): the slanted embracing line loop on the body passing through the shoulder point and the two break points.
We define a cutting plane:
Armscye = CuttingPlane(LeftShoulderP, FrontArmpit, BackArmpit) ∩ Torso.  (37)

h. Bust girth: the lateral embracing line loop on the body across the two breast points.
The bust girth is obtained by cutting the body with a horizontal plane which passes through the bust points:
BustGirth = CuttingPlane(RightBustP, LeftBustP, ParallelToXYPlane) ∩ Torso.  (38)

i. Front princess line: a contour curve starting from the mid-shoulder point, passing through the bust point, crossing over the waist girth and hip girth, and ending at the front root of the foot. There are two front princess lines, one on each of the left and right portions of the body.
The front princess line is composed of two sections, the upper front princess line and the lower front princess line:
UpperFrontPrincessline = CuttingPlane(MidShoulderP, BustP, ShoulderBladeP) ∩ Torso
LowerFrontPrincessline = CuttingPlane(BustP, FrontRootCenter, WaistCentroid) ∩ Torso
FrontPrincessline = UpperFrontPrincessline ∪ LowerFrontPrincessline.  (39)

j. Back princess line: a contour curve starting from the mid-shoulder point, passing through the shoulder blade point, crossing over the waist girth and hip girth, and stopping at the back root of the foot. There are two back princess lines, one on each of the left and right portions of the body.
Similar to the front princess line, the back princess line is a composite of two sections, the upper back princess line and the lower back princess line:
UpperBackPrincessline = CuttingPlane(MidShoulderP, BustP, ShoulderBladeP) ∩ Torso
LowerBackPrincessline = CuttingPlane(ShoulderBladeP, BackRootCenter, WaistCentroid) ∩ Torso
BackPrincessline = UpperBackPrincessline ∪ LowerBackPrincessline.  (40)

k. Under bust girth: the lateral embracing line loop on the body close under the breasts.
T = {P | P ∈ FrontPrincessline, WaistGirth ≤ Pz ≤ BustGirth}
TurningP = Max(Bv(T))
UnderBustGirth = CuttingPlane(TurningP, ParallelToXYPlane) ∩ Torso.  (41)

l. Waist girth: the minimum lateral circumference on the body between the bust girth and the hip girth.
T = {P | Pz ≤ BustGirth}
WaistGirth = CuttingPlane(MinCircumference(T), ParallelToXYPlane) ∩ Torso.  (42)

m. Hip girth: the maximum circumference on the belly between the mid-waist girth and the crotch girth.
T = {P | Pz ≤ WaistGirth}
HipGirth = CuttingPlane(MaxCircumference(T), ParallelToXYPlane) ∩ Torso.  (43)

n. Mid-waist girth: the horizontal embracing line loop under the waist girth, across the protrusion point of the belly.
This horizontal girth passes through the maximum protrusion point of the front centerline between the waist and the hip girth. The protrusion point is given by
ProtrusionP = {Sobel(Px) = 0 | P ∈ FrontCenterline, HipGirth ≤ Pz ≤ WaistGirth}
MidWaistGirth = CuttingPlane(ProtrusionP, ParallelToXYPlane) ∩ Torso.  (44)

o. Crotch girth: the horizontal contour passing through, and at the same height as, the crotch point.
CrotchGirth = CuttingPlane(CrotchP, ParallelToXYPlane) ∩ Torso.  (45)

p. Side line: the natural curve starting from the center of the paired front and back armpits, crossing over the waist girth and hip girth, and stopping at the side root of the foot. There are two side lines, one on each of the left and right portions of the body.
T = {P | 0° ≤ θ ≤ 180°, CrotchGirth ≤ Pz ≤ ArmpitGirth}
T′ = {P | Sobel(T) = 0}.  (46)

The sequence of the feature searching process is important, since many of the body features are identified based on the results obtained for others. For example, the shoulder line passes through the side neck point and the acromion, so their identification must be performed in the correct order. Table 1 lists the order of extraction used in this research.

4.3. System realization

Given the definitions stated in the previous section, we developed a program to extract the features of a scanned human body. In this section we describe a few examples of the application of the definitions in feature extraction.

In the case of the centerline, the aim is to determine the intersection between the sagittal plane and the front portion of the torso. We applied a 1-D horizontal Sobel mask to the cylindrical range-coded image; the zero-crossing points of the Sobel mask are the candidate points composing the centerline. During the scanning process, the subject was asked to step on a guide position printed on the platform, so the search for the symmetry plane is limited to within ±20° on the image.
Fig. 14. Standard scan pose of all subjects.

Table 1
The sequence of body feature extraction

Sequence   Feature points             Sequence   Feature lines
1          Armpits, shoulder points   2          Armhole
                                      3          Centerline
                                      4          Upper neck line
5          Front neck point           7          Lower neck line
6          Side neck points
9          Mid-shoulder points        8          Shoulder lines
                                      10         Shoulder girth
11         Breast points              12         Bust girth
13         Shoulder blade points
                                      14         Waist girth
                                      15         Hip girth
16         Crotch point               17         Crotch girth
                                      18         Mid-waist girth
                                      19         Princess lines
                                      20         Under bust girth
                                      21         Side lines

After averaging the angles of the zero-crossing points, the entire image is shifted left or right so that the sagittal plane aligns properly with the coordinate system.

Another example is the location of the side neck point. The frontal image is created by using the Y and Z coordinates as image coordinates. The frontal contour is obtained by using an edge detection mask, and bending value analysis is conducted over the entire length of the contour. The contour point with the maximum bending value is taken as the side neck point. In order to validate the results, the horizontal and vertical distances between the two side neck points should be within a given ratio with respect to the distance between the two shoulder points. If the results do not meet these requirements, the system prompts the operator to define the features manually.

In this study, the number of scan points is reduced by using a range image representation while maintaining most details. Hole filling and noise filtering can be achieved without complex algorithms, and accessing the points in any given region is relatively fast, because the range image representation avoids the overhead of a complex data structure. In other studies, the feature identification methods are applied directly to cloud points [12,18], which might be affected by the number of data points and by noisy data. We also found that image processing operations are able to extract body features very efficiently from a range image: for example, the Sobel mask is very useful for locating points with extreme values, while the Laplace mask and the bending value are used for locating points with maximal bending energy. All the features listed in the previous sections were analyzed, and sequences of operations were designed to extract them automatically. In Wang's work [12], only 5 primary features were identified by their definitions, while the secondary features were located roughly using common proportions of the human body; thus the positions of the secondary features may or may not have represented the geometrical features of the actual subject.

5. Results

This automated feature extraction software was developed in C++ and executed on a Pentium IV 3.0 GHz personal computer. The whole process takes less than 2 min to identify every body feature, starting from the raw point cloud. Each of these features stands for an important landmark in apparel design and anthropometry. Currently 21 feature points and 35 feature lines can be identified, and this number can be extended if needed.

Several Asian female subjects of different ages and with obviously distinct figures were scanned and tested with the developed software. Each subject was scanned three successive times; between scans, the subject was asked to reposition herself in the scanner, in order to evaluate the effect of positioning on measurement consistency. Table 2 lists summary data for the subjects; one example of their scan data is shown in Fig. 14. The mean values of each measurement are also listed in Table 2.

In this study we used the mean absolute difference (MAD) to quantify the consistency between scans. Table 3 lists the MAD values for each subject.

Table 2
Statistics of the subjects

Subject 1 2 3 4 5 Mean SD
Age 47 45 72 31 20 43.0 19.6
Weight (kg) 49 57 52 57 44 51.8 5.5
Stature (cm) 158 158 153 161 150 156.0 4.4
Acromial height, left (mm) 1314.7 1286.2 1298.8 1324.8 1206.8 1286.3 46.8
Acromial height, right (mm) 1313.7 1286.5 1282.5 1324.8 1196.2 1280.7 50.5
Acromial-girth, left (mm) 341.3 405.4 537.7 390.4 335.7 402.1 81.6
Acromial-girth, right (mm) 371.3 419.6 551.9 397.9 322.9 412.7 85.8
Axilla height, left (mm) 1216.3 1161.6 1127.2 1209.2 1108.9 1164.6 47.9
Axilla height, right (mm) 1203.3 1161.4 1115.4 1211.7 1108.1 1160.0 48.1
Cervical height (mm) 1305.0 1341.8 1339.4 1353.4 1258.1 1319.6 38.8
Bust points breadth (mm) 118.0 147.2 124.8 142.2 174.5 141.3 22.1
Chest height (mm) 1155.0 1116.7 1094.3 1163.0 1040.7 1113.9 49.6
Bust girth (mm) 852.3 889.1 944.8 875.4 796.4 871.6 54.1
Under bust height (mm) 1092.3 1060.1 1032.1 1103.1 993.8 1056.3 44.7
Under bust girth (mm) 729.7 746.2 806.6 806.6 642.9 746.4 67.5
Waist height (mm) 1010.0 979.0 958.7 1057.0 844.0 969.7 79.5
Waist girth (mm) 702.7 713.7 760.7 743.7 667.1 717.6 36.5
Hip height (mm) 823.0 764.4 765.1 832.1 732.1 783.4 42.6
Hip girth (mm) 896.0 948.0 944.0 1017.0 811.0 923.2 76.1
Crotch height (mm) 719.7 686.9 694.3 737.6 658.6 699.4 30.5

Table 3
MAD of the individual subjects

Measurement                       1      2      3      4      5      ANSUR allowable error (mm)


Acromial height, left (mm) 1.8 1.94 1.06 0.39 1.06 7
Acromial height, right (mm) 2.2 2.50 1.17 2.39 1.94 7
Acromial-girth, left (mm) 8.4 6.41 1.81 4.80 3.43 13
Acromial-girth, right (mm) 8.2 7.85 5.63 9.04 6.37 13
Axilla height, left (mm) 5.6 3.85 3.19 3.93 1.63 10
Axilla height, right (mm) 8.4 7.80 1.74 3.76 1.35 10
Cervical height (mm) 2.7 0.74 2.52 0.30 1.04 7
Bust points breadth (mm) 2.0 0.78 1.72 0.61 2.33 10
Chest height (mm) 3.3 0.56 2.56 1.00 4.11 11
Bust girth (mm) 1.8 2.59 5.41 2.15 7.19 15
Under bust height (mm) 2.2 1.41 1.30 0.70 4.41 N/Aa
Under bust girth (mm) 2.2 2.07 1.48 1.48 2.07 16
Waist height (mm) 2.0 1.67 1.44 3.67 1.00 11
Waist girth (mm) 3.6 1.15 5.24 2.57 2.63 11
Hip height (mm) 4.7 2.52 2.74 1.63 1.70 7
Hip girth (mm) 4.0 2.67 4.67 2.00 3.33 12
Crotch height (mm) 6.4 3.30 3.52 5.80 5.80 10
a The value is not listed in ANSUR.
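As a minimal illustration of how the consistency statistics in Tables 3 and 4 can be computed from the repeated scans; MAD is taken here as the mean absolute deviation from the per-subject mean, which is our reading of the paper's "mean absolute difference":

#include <cmath>
#include <numeric>
#include <vector>

struct Consistency { double mean, mad, cv; };

// Per-measurement consistency across repeated scans of one subject: the mean,
// the mean absolute deviation from that mean (assumed MAD definition), and
// the coefficient of variation CV = SD / mean, reported in percent. At least
// two scans are assumed.
Consistency consistency(const std::vector<double>& scans)
{
    double mean = std::accumulate(scans.begin(), scans.end(), 0.0) / scans.size();
    double mad = 0.0, var = 0.0;
    for (double v : scans) {
        mad += std::fabs(v - mean);
        var += (v - mean) * (v - mean);
    }
    mad /= scans.size();
    var /= scans.size() - 1;                       // sample variance
    return { mean, mad, std::sqrt(var) / mean * 100.0 };
}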

Table 4 lists the heights of the key features and the circumference dimensions, along with the allowable errors according to ANSUR [31]. All but one of the MAD values are less than 5 mm; the circumference of the right armhole, at 7.42 mm, is the only one greater than 5 mm. The coefficient of variation (CV) is another common measure of consistency, defined as the standard deviation (SD) divided by the mean; in general, a CV below 5% [32] indicates consistency of the system. Of the 17 measurements in this study, 16 have a CV of less than 2%, and the remaining one is less than 3%.

6. Discussion and conclusions

An automatic body feature extraction algorithm based on image processing and computational geometry has been presented. Our method conducts the computations in the 2D depth space, which is much more efficient than computing in the original complex 3D point cloud. Moreover, the voids produced by the body scanner can be filled using a simple interpolation method on the depth image, and the noise generated by the body scanner is easy to eliminate using image processing techniques.

Based on the important geometrical definitions of body features, our algorithm is capable of identifying every body feature within minutes. Individuals with extreme body proportions were not included in this study; for such individuals, features may not be identifiable by their logical definitions, since, for example, the geometrical features of the skin layers are not manifest for fat adult subjects. In such cases, the common proportions of human beings may be combined into the definitions to assist in locating the possible positions of the features. Fig. 15(b), (d) and (f) illustrate some representative

Fig. 15. Sample exhibiting cases of body feature lines.

outcomes. We should also note that body data in overlapping regions is not captured in the point cloud; thus converting the scan data into a cylindrical range image is not ambiguous, although neighboring pixels across a "crease" should be considered as lying on two separate patches of skin rather than on one continuous surface.

When using this novel feature recognition algorithm, markers stitched on the human body are no longer needed to identify body features with the body scanner. In order to test the reliability and usability of the system, each subject was scanned 3 times. All MAD values were less than the allowable errors proposed by ANSUR. The MAD values of the height and length point-to-point measurements were similar to the findings in CAESAR [32]. The MAD values of the bust, waist and hip girth circumferences were significantly better than the allowable errors, although they were greater than those of the height or length measurements. The worst MAD values obtained in this research were for the circumferences of the armholes and the height of the crotch. The reason is that the armpit and crotch regions are obstructed during scanning, so recreating the shapes of the armhole and crotch regions in perfect geometry is almost impossible. Additional local scans of the obstructed regions could improve the outcomes of the evaluation.

Table 4
Feature extraction MAD and CV results
Items ANSUR allowable error (mm) Mean (mm) SD (mm) CV (%) MAD (mm)
Acromial height (left) 7 1286.3 1.7 0.13 1.24
Acromial height (right) 7 1280.7 2.8 0.22 2.04
Acromial-girth (left) 13 402.1 6.6 1.65 4.98
Acromial-girth (right) 13 412.7 10.2 2.47 7.42
Axilla height (left) 10 1164.6 5.0 0.43 3.63
Axilla height (right) 10 1160.0 6.1 0.53 4.62
Cervical height 7 1319.6 1.9 0.15 1.45
Bust points breadth 10 141.3 2.0 1.45 1.49
Chest height 11 1113.9 3.1 0.28 2.31
Bust girth 15 871.6 5.3 0.61 3.82
Under bust height N/Aa 1056.3 2.8 0.26 2.01
Under bust girth 16 746.4 2.5 0.33 1.87
Waist height 11 969.7 2.7 0.27 1.96
Waist girth 11 717.6 4.0 0.56 3.03
Hip height 7 783.4 3.6 0.46 2.65
Hip girth 12 923.2 4.6 0.50 3.33
Crotch height 10 700.4 6.8 0.97 4.96
a The data is not listed in ANSUR.

This research provides an important infrastructure for the subsequent development of an automatic anthropometry system integrated with an ordinary body scanner. Based on the successful outcome of body feature identification, we can predict that the previously tedious and labor-intensive collection of national sizing data will become much easier in future.

7. Future work

The ongoing work for this study may be divided into six directions: firstly, building up a database of standard models with representative ages, sexes, and figures; secondly, applying a similar methodology to build up a database for the human head, arms, and legs, with the aim of constructing a realistic digital human; thirdly, amending the feature definitions in order to apply the algorithm to scanned men or children; fourthly, expanding the outcomes of this study to anthropometry automation; fifthly, developing intuitive customized garment-draping design and manufacturing; and, last but not least, drawing up ergonomic standards for the tools, facilities, and appliances needed by society.

References

[1] Anthroscan. http://www.human-solutions.com/apparel_industry/anthroscan_en.php; 2005.
[2] BodyShape. http://www.bodyshapescanners.com/; 2005.
[3] Cyberware. http://www.cyberware.com; 2005.
[4] Gemini. http://www.oes.itri.org.tw/coretech/imaging/img_3di_und_007.html; 2005.
[5] Hamamatsu. http://usa.hamamatsu.com/sys-industrial/blscanner; 2005.
[6] Inspeck. http://www.inspeck.com/; 2005.
[7] TC2. http://www.tc2.com/; 2005.
[8] TriForm. http://www.wwl.co.uk/; 2005.
[9] Vitus. http://www.vitronic.com/; 2005.
[10] Nurre JH. Locating landmarks on human body scan data. In: Proc. of the international conference on recent advances in 3-D digital imaging and modeling. 1997. p. 289–95.
[11] Nurre JH, Connor J, Lewark EA, Collier JS. On segmenting the three-dimensional scan data of a human body. IEEE Transactions on Medical Imaging 2000;19(8):787–97.
[12] Wang CL, Chang KK, Yuen MF. From laser-scanned data to feature human model: A system based on fuzzy logic concept. Computer-Aided Design 2003;35(3):241–53.
[13] Ju X, Werghi N, Siebert JP. Automatic segmentation of 3D human body scans. In: Proc. of the IASTED international conference on computer graphics and imaging. 2000.
[14] Pargas RP, Staples NJ, Davis JS. Automatic measurement extraction for apparel from a three-dimensional body scan. Optics and Lasers in Engineering 1997;28:157–72.
[15] Robinette K, Daanen H, Paquet E. The CAESAR project: A 3-D surface anthropometry survey. In: Proc. of the IEEE 2nd international conference on 3-D digital imaging and modeling. 1999. p. 380–6.
[16] Robinette K, Boehmer M, Burnsides D. 3-D landmark detection and identification in the CAESAR project. In: Proc. of the 3rd international conference on 3-D digital imaging and modeling. 2001. p. 393–8.
[17] Ashdown S, Loker S. Use of body scan data to design sizing systems based on target markets. American National Textile Center Annual Report; 2001.
[18] Wang CL. Parameterization and parametric design of mannequins. Computer-Aided Design 2005;37(1):83–98.
[19] Simmons KP. Body measurement techniques: A comparison of three-dimensional body scanning and physical anthropometric methods. Ph.D. dissertation. Raleigh (NC): North Carolina State University; 2001.
[20] International Organization for Standardization. Garment construction and anthropometric surveys: Body dimensions. Reference no. 8559:1989. Switzerland: ISO; 1989.
[21] Lin CC. A study on the development of a body scanner and processing of the scanned data. Master dissertation. Tainan (Taiwan): National Cheng Kung University; 2003.
[22] Gonzalez RC, Woods RE. Digital image processing. USA: Addison-Wesley; 1992.
[23] Laszlo MJ. Computational geometry and computer graphics in C++. USA: Prentice Hall; 1996.
[24] Anand VB. Computer graphics and geometric modeling for engineers. USA: John Wiley & Sons; 1993.
[25] Wang MJ, Wu WY, Huang LK, Wang DM. Corner detection using bending value. Pattern Recognition Letters 1995;16:575–83.
[26] Ratner P. 3-D human modeling and animation. 2nd ed. New York: Wiley; 2003. p. 55–7.
[27] Cooklin G. Pattern grading for women's clothes: The technology of sizing. Oxford: BSP Professional Books; 1990.
[28] Seitz T, Balzulat J, Bubb H. Anthropometry and measurement of posture and motion. International Journal of Industrial Ergonomics 2000;25:447–53.

[29] Solinger J. Apparel manufacturing handbook: Analysis, principles, practice. 2nd ed. Columbia (SC): Bobbin Media Corp; 1988.
[30] Taylor PJ, Shoben MM. Grading for the fashion industry: The theory and practice. Cheltenham: LCFS Fashion Media; 1990.
[31] Gordon CC, Bradtmiller B, Clausen CE, Churchill T, McConville JT, Tebbetts I, et al. 1987–1988 anthropometric survey of US Army personnel: Methods and summary statistics. Natick/TR-89-044. Natick (MA): US Army Natick Research, Development and Engineering Center; 1989.
[32] Robinette KM, Daanen HAM. Precision of the CAESAR scan-extracted measurements. Applied Ergonomics 2006;37(3):259–65.

Iat-Fai Leong is a Ph.D. student at National Cheng Kung University, Taiwan. He graduated in 1998 with a BS degree in Mechanical Engineering and received an M.Sc. degree in 2000, both at National Cheng Kung University. His research interests are in the areas of computer graphics and computer-aided geometric design.

Jing-Jing Fang is an associate professor in the Department of Mechanical Engineering at National Cheng Kung University, Taiwan. She leads a research team working on digital mannequins, 3D garment design, pattern generation, image-based surgical planning, and surgical navigation. Her research interests are geometric modeling, object-oriented design, and virtual reality applications. She received her BS and M.Sc. in applied mathematics in Taiwan, 1984, and her Ph.D. in mechanical and chemical engineering from Heriot-Watt University, Britain, 1996.

Ming-June Tsai is a professor in the Department of Mechanical Engineering at National Cheng Kung University, Taiwan. He received his Ph.D. in Mechanical Engineering from Ohio State University, 1986. His research interests are robotics and automation, image processing and feature recognition, the design of optical inspection systems, and geometrical reverse engineering systems.
