[go: up one dir, main page]

0% found this document useful (0 votes)
113 views11 pages

Multi View Wire Art

The document describes a computational framework for automatically generating 3D wire sculptures from 2D line drawings provided at different viewpoints. The system takes line drawings and associated viewpoints as input and produces a 3D wire structure that recreates the line drawings from the different views. This is challenging due to conflicting constraints between views, which the framework addresses through techniques like voxel selection and curve deformation optimization.
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
113 views11 pages

Multi View Wire Art

The document describes a computational framework for automatically generating 3D wire sculptures from 2D line drawings provided at different viewpoints. The system takes line drawings and associated viewpoints as input and produces a 3D wire structure that recreates the line drawings from the different views. This is challenging due to conflicting constraints between views, which the framework addresses through techniques like voxel selection and curve deformation optimization.
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 11

Multi-view Wire Art

KAI-WEN HSIAO, National Tsing Hua University


JIA-BIN HUANG, Virginia Tech
HUNG-KUO CHU, National Tsing Hua University
Input
3D wire art

Fig. 1. We present an algorithm that takes line drawing images and the spatial arrangement of viewpoints as inputs and produces 3D wire sculpture art
showing distinct interpretations when viewed at different angles. The generated 3D wire sculptures can be used to cast distinct shadows onto three mutually
orthogonal planes (left), create different poses of an animated cartoon characters (middle), or exhibit two concepts at the same time, e.g., texts v.s. logo (right).
The 3D wire art can be appreciated either with light sources casting shadows onto external planar surfaces or directly viewing from certain viewpoints.

Wire art is the creation of three-dimensional sculptural art using wire strands. cubic splines and optimizing the curve deformation to best approximate the
As the 2D projection of a 3D wire sculpture forms line drawing patterns, provided line drawings. We demonstrate the effectiveness of our system for
it is possible to craft multi-view wire sculpture art — a static sculpture creating compelling multi-view wire sculptures in both simulation and 3D
with multiple (potentially very different) interpretations when perceived physical printouts.
at different viewpoints. Artists can effectively leverage this characteristic CCS Concepts: • Computing methodologies → 3D imaging; Paramet-
and produce compelling artistic effects. However, the creation of such multi- ric curve and surface models;
view wire sculpture is extremely time-consuming even by highly skilled
artists. In this paper, we present a computational framework for automatic Additional Key Words and Phrases: image-based modeling, wire art, skeleton
creation of multi-view 3D wire sculpture. Our system takes two or three extraction, shape deformation
user-specified line drawings and the associated viewpoints as inputs. We ACM Reference Format:
start with producing a sparse set of voxels via greedy selection approach Kai-Wen Hsiao, Jia-Bin Huang, and Hung-Kuo Chu. 2018. Multi-view Wire
such that their projections on the virtual cameras cover all the contour Art. ACM Trans. Graph. 37, 6, Article 242 (November 2018), 11 pages. https:
pixels of the input line drawings. The sparse set of voxels, however, do not //doi.org/10.1145/3272127.3275070
necessary form one single connected component. We introduce a constrained
3D pathfinding algorithm to link isolated groups of voxels into a connected 1 INTRODUCTION
component while maintaining the similarity between the projected voxels
and the line drawings. Using the reconstructed visual hull, we extract a Wire sculpture is a unique art form that creates complex objects out
curve skeleton and produce a collection of smooth 3D curves by fitting of wire. The use of wire as a medium for sculpturing has been widely
applied to furniture design [Postell 2012], crafting wire wrapped jew-
Authors’ addresses: Kai-Wen Hsiao, National Tsing Hua University, kevin30112@gmail.
com; Jia-Bin Huang, Virginia Tech, jbhuang@vt.edu; Hung-Kuo Chu, National Tsing elry [Iarussi et al. 2015; WigJig 2015], and wire sculptural art [2007].
Hua University, hkchu@cs.nthu.edu.tw. Very recently, French artist Matthieu Robert-Ortis has created com-
pelling wire sculptures that exhibit two distinct image interpreta-
Permission to make digital or hard copies of all or part of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed tions when viewed at different perspectives. Figure 2 presents two
for profit or commercial advantage and that copies bear this notice and the full citation examples of multi-view wire sculpture art. 1
on the first page. Copyrights for components of this work owned by others than ACM Similar to shadow art, multi-view wire art allows us to create
must be honored. Abstracting with credit is permitted. To copy otherwise, or republish,
to post on servers or to redistribute to lists, requires prior specific permission and/or a objects with multiple interpretable line drawings at different view-
fee. Request permissions from permissions@acm.org. points. Furthermore, wire art offers additional flexibility in that the
© 2018 Association for Computing Machinery.
1 Moreexamples can be found at http://cargocollective.com/matthieu-robert-ortis/
0730-0301/2018/11-ART242 $15.00
https://doi.org/10.1145/3272127.3275070 En-video

ACM Transactions on Graphics, Vol. 37, No. 6, Article 242. Publication date: November 2018.
242:2 • Hsiao, Huang, and Chu

Inputs Triple-consistent Pairwise-consistent


First view Int. view Second view
Fig. 3. Conflicting silhouette constraints. (Left) Three input line draw-
Fig. 2. Example of multi-view wire sculpture art. The anamorphose ings. (Middle) When constructing the visual hulls using the three input
sculpture created by the French sculptor Matthieu Robert-Ortis is a clas- silhouettes at mutually orthogonal viewpoints, selecting voxels by intersect-
sic example of multi-view wire art. When viewing from one specific angle, ing the generalized cones from all three views (i.e., triplet-consistent) only
we perceive a drawing of an elephant. When viewing from another view produces very sparse points at the projected view. (Right) Keeping voxels
point, the interpretation changes into two giraffes. The 2D projection in the that are consistent with at least a pair of views (i.e., pairwise-consistent)
intermediate view does not produce an interpretable image. helps increase the coverage of the desired line contour, but yield unrecog-
nizable image due to conflicting constraints.

wire sculpture does not require a light source for casting shadows
to see the hidden images. It is thus of great interests for enabling Our contributions. (1) A system for the automatic creation of multi-
designers and hobbyists to create multi-view wire art. However, view wire sculpture art using only user-specified 2D line drawings
designing and creating multi-view wire art is prohibitively difficult and the associated viewpoints. Our system is capable of generating
for novice users due to the required artistic skills and significant complex wire sculptures that are extremely difficult to design and
efforts in resolving conflicting contour constraints from two or more implement manually (e.g., three views); (2) A suite of computational
line drawings. techniques tailored to resolving conflicting constraints from multi-
ple views while maintaining similarity between the projections and
Our work. We present a system for enabling users to create the the input line drawings; (3) A characterization of quality over voxel
desired multi-view wire art sculpture. As shown in Figure 1, our resolution and the interval of viewing directions.
system takes two or three line drawing images and the associated
viewpoints provided by the user as inputs and produce the 3D wire 2 RELATED WORK
sculpture that forms the user-specified line drawings at different 3D reconstruction. Our problem builds upon the concepts and
viewpoints. This problem is challenging because the wire struc- techniques in 3D reconstruction from multiple 2D image observa-
ture satisfying all constraints by input line drawings often does tions. Structure-from-motion (SfM) [Schonberger and Frahm 2016;
not exist (see Figure 3). To tackle the challenge, we develop tools Snavely et al. 2006] and multi-view stereo (MVS) [Collins 1996; Fu-
for constructing the sculpture by balancing the smoothness of the rukawa and Ponce 2010; Goesele et al. 2007; Huang et al. 2018; Kuhn
curves and the similarity between the specified line drawing and et al. 2017; Schönberger et al. 2016] algorithms are capable of recon-
the projection at multiple viewpoints. structing detailed 3D models in unconstrained settings (e.g., photos
Specifically, our approach consists of two main stages: (1) visual from the Internet). We refer the readers to [Furukawa and Hernán-
hull reconstruction, (2) 3D curve skeleton extraction and defor- dez 2015] for a comprehensive overview of MVS algorithms. These
mation. In the first stage, we use a greedy selection approach to methods, however, rely on establishing point correspondence across
generate a collection of voxels covering all the contour pixels of images and therefore have difficulty in handling smooth regions
the input line drawings and introduce a constrained 3D pathfind- or thin wire structures. In light of this, several recent approaches
ing algorithm to link the resulting isolated components into one exploit higher order features such as lines or curves as primitives
single connected component that represents the final visual hull. for 3D reconstruction [Fabbri and Kimia 2010; Hofer et al. 2017;
Second, based on the reconstructed visual hull, we extract a 3D curve Liu et al. 2017; Usumezbas et al. 2016]. Assuming clean foreground-
skeleton by exploiting a novel quality measurement that captures background separation, visual hull and silhouette intersection based
projection errors and structural complexity of the curve skeleton. algorithms can be also applied to construct 3D models from im-
We then fit individual skeleton lines using cubic splines and perform age observations [Laurentini 1994; Lazebnik et al. 2007; Matusik
image-guided curve deformation to improve the projection error et al. 2000; Szeliski 1993] or from user-specified sketchs [Olsen et al.
with respect to the input line drawings. Although our algorithm runs 2009; Rivers et al. 2010]. Similar to visual hull based algorithms, our
automatically, we provide interactive tools that enable interactive method also uses multiple silhouettes (from user-specified line draw-
editing for users to repair or simplify the generated wire sculpture ings) as inputs to construct a 3D model of a wire sculpture where
using 2D strokes. We demonstrate the effectiveness of our system its projections match with the input silhouettes as much as possible.
on a wide variety of different line drawings. Numerous results show The difference lies in that our input silhouettes may conflict with
that the proposed method can produce multi-view wire sculptures each other (see an example in Figure 3) and produce many isolated
with small distortions in the projected views. components in the visual hull, causing difficulty in manufacturing

ACM Transactions on Graphics, Vol. 37, No. 6, Article 242. Publication date: November 2018.
Multi-view Wire Art • 242:3

Section 4.1 Section 4.2 Section 5.2 Section 5.3

(a) Inputs (b) Discrete visual hull (c) Connected visual hull (d) Extracted curve skeleton (e) Final wire sculpture

Fig. 4. Method overview. Given three input line drawing images (a), our system starts with reconstructing a discrete visual hull (b) through intersecting
generalized cones formed by back-projecting the 2D image to 3D using the associated camera parameters (mutually orthogonal viewpoints in this example).
We then integrate the isolated components (green voxels) into a connected visual hull (c) via a 3D pathfinding method that jointly analyzes the spatial
relations between components in 3D and their 2D counterparts. The traced 3D pathes are represented by blue voxels. Next, we apply a volumetric thinning
algorithm [Liu et al. 2010] to extract a curve skeleton from the reconstructed visual hull, followed by a structure simplification process that grounds a
tailor-made quality measurement to strike a balance between i) the projection error and ii) the structure compactness of resulting curve skeleton (d). Lastly,
we fit the individual skeleton lines (colored line segments) with cubic splines and employ an image-guided 3D curve deformation to obtain the final smooth,
continuous, and compact 3D wire sculpture (e). (Top row) The projections of intermediate products using associated camera parameters.

a physical wire sculpture. Our work focuses on resolving the incon- also falls into the category of 3D wire design and modeling but
sistency and constructing a collection of smooth, continuous curve targets fundamentally different context.
skeletons while minimizing the distortion between the projected
views and the input line drawings. 3 METHOD OVERVIEW
Our system takes input as a set of 2D line drawing images (or simply
Multiple interpretations from an object. Our work on creating images), I = {I 1 , ..., In }, along with the associated camera parame-
multi-view wire art is related to several approaches for producing ters specified by an user (i.e., pose, and perspective or orthogonal
multiple visual interpretations from a single object. The perception projection), P = {P 1 , ..., Pn }, where n = {2, 3} views in our experi-
of a static object (e.g., image, relief, sculpture) can vary dramatically ments. Our goal is to reconstruct a set of 3D curves C = {C 1 , ..., Ck }
depending on viewing distances [Oliva et al. 2006], figure-ground represented by cubic splines. These curves together form a smooth,
organization [Kuo et al. 2016], illumination from a certain direc- continuous, and compact 3D wire sculpture, which while seemingly
tion [Alexa and Matusik 2010; Bermano et al. 2012], viewing direc- irregular in both its geometry and structure, can precisely interpret
tions [Keiren et al. 2009; Sela and Elber 2007], or casting shadows the input images when viewed or cast shadows from the specified
onto external planar surfaces [Min et al. 2017; Mitra and Pauly 2009]. viewpoints. Figure 4 presents an overview of our system.
Won and Lee [2016] further extended the idea to create shadow the-
Pre-processing. For each input image, Ik , we first apply a simple
atre from dynamic objects (i.e., animated characters). Our work
thresholding and thinning algorithm to extract a set of one-pixel
differs in that we use 3D wire as a medium, allowing us to exhibit a
wide 2D curve lines. We represent these foreground pixels of Ik
clear prescribed line drawing either from certain viewing directions
as a graph G k = {Vk , Ek }, where Vk are pixels along the curve
or cast shadows on planar surfaces using a point light source.
lines and Ek encode their connectivities. If the graph G k contains
multiple connected components, we further add edges to connect
Wire sculpture design and modeling. More recently, there is a line them according to their spatial proximity and use a conventional
of works devoted to modeling and fabricating the wire sculptures of minimum spanning tree algorithm to get a connected graph.
a specific form. Iarussi et al. [2015] presented a computational frame- Given the preprocessed input images, our system starts by back-
work to assist the creation of wire wrapped jewelry. Liu et al. [2017] projecting the foreground pixels to the 3D domain using the asso-
extended the dimension from 2D to 3D modeling of wire sculptures ciated camera parameters to form a set of generalized cones [Lau-
using only a few input images of physical objects. Miguel et al. [2016] rentini 1994]. We then discretize the space into volumetric grids (or
proposed a computer-aided tool to facilitate converting a 3D model voxels) and find a minimum set of voxels such that the foreground
into a stable, self-supporting wire sculpture that is fabricatable with pixels in each view can be completely covered by the projections of
a 2D wire bending machine. Similarly, Zehnder et al. [2016] devel- voxels. Applying a flooding algorithm produces a discrete visual hull,
oped a computational design tool, targeting for the fabrication of which often contains an excessive amount of isolated components
ornamental curve networks defined on the 3D surfaces. Our work in our context (Figure 4(b), Section 4.1). To integrate these isolated

ACM Transactions on Graphics, Vol. 37, No. 6, Article 242. Publication date: November 2018.
242:4 • Hsiao, Huang, and Chu

components into one connected component, we first connect two


proximate components via a 3D pathfinding method that jointly
analyzes the spatial relations between two components in 3D and
their 2D counterparts. Then we solve a binary labeling problem
that retains necessary links to form a connected visual hull while
minimizing the projection error due to the additional inconsistent
voxels from the links (Figure 4(c), Section 4.2).
Initial visual hull Expanded visual hull
Here, the reconstructed visual hull represents a well-defined space
in 3D where the curves of final wire sculpture would lie in. We first Fig. 5. Initial vs. expanded visual hull. Given the same inputs as in Fig-
adopt a state-of-the-art volumetric thinning algorithm [Liu et al. ure 3, (Left) the initial visual hull with only triple-consistent voxels produces
2010] to effectively compute a shape- and topology-preserving curve sparse structure and incomplete projection. (Right) Exanding the initial
skeleton from the reconstructed visual hull. However, the extracted visual hull with voxels that are aware of both i) projection error and ii)
curve skeleton is typically complex in geometry, contains numerous spatial compactness leads to a projection covering complete image contours
redundant parts, and thus requires further refinement to be phys- without introducing severe artifacts.
ically realizable (e.g., 3D printing). To this end, we devise a novel
quality measurement tailored for wire modeling (Section 5.1) by it-
eratively removing the constituent skeleton lines to strike a balance these voxels leads to highly incomplete images due to the incon-
between projection error and structure compactness (Figure 4(d), sistency between the reference images (Figure 5(left)). Mitra and
Section 5.2). Finally, we obtain a smooth, continuous, and compact Pauly [2009] tackle the inconsistency problem by a global optimiza-
3D wire sculpture by fitting individual skeleton lines with paramet- tion that deforms the reference images. While such a deformation
ric cubic splines, followed by an image-guided curve deformation framework performs well for solid shapes, it becomes ill-posed for
to improve the projection accuracy with respect to the input images line drawing images that contain many hollow regions formed by
(Figure 4(e), Section 5.3). thin and complex line structures. We propose a simple yet effective
approach to further expand visual hull with voxels that are aware of
4 VISUAL HULL RECONSTRUCTION both i) projection error in 2D and ii) spatial compactness in 3D. For
Reconstructing a 3D wire sculpture from multiple reference 2D each of the incomplete pixel pk ∈ Vk , we can find a ray of voxels
images is related to shadow art [Mitra and Pauly 2009] as well as that map to pk . To favor voxels that have small projection errors
shape-from-silhouette applications [Laurentini 1994]. In this section, and are spatially close to the target visual hull, we define the cost
we describe a common step shared by existing works to reconstruct of the ith voxel x i as follows:
a 3D visual hull — a set of voxels whose projections best approximate n
D k (x̃ ik ) + (1 −
Õ Õ
the reference images using the associated camera parameters. With D prox (x i , x j )), (1)
the reconstructed visual hull, we can then extract a compact wire k =1 x j ∈N x i
sculpture with smooth, continuous 3D curves. Specifically, we aim to
where D k is a distance transform map that measures the projection
reconstruct the visual hull meeting the following two requirements:
error with respect to image G k . The term N x i contains the voxels in
(1) Completeness: all the foreground pixels in the reference line
the visual hull as well as 12 × 12 × 12 neighboring voxels to x i . The
drawings should be covered by the projections of the visual hull
function D prox (x i , x j ) computes the proximity between voxels using
(Section 4.1); (2) Connectivity: the constituent voxels of the visual
a normalized Gaussian kernel with µ = x i and σ = 2. Then, we
hull must form a connected component (Section 4.2).
employ a greedy algorithm that expands the visual hull by selecting
the least-cost voxel for each incomplete pixel pk and repeats the
4.1 Discrete Visual Hull Generation
process until all the pixels in Vk are labeled as complete. As shown
Given a set of reference images, denoted as {G k , Pk |k = 1 ∼ n}, our in Figure 5(right), this process results in a discrete visual hull that
system starts by back-projecting the 2D image, G k = {Vk , Ek }, to fulfills the completeness requirement. We include the pseudocode
3D using the camera projection parameters, Pk , and generating a set that details the above procedure in the supplementary document.
of generalized cones. To find a 3D volume that is compatible with G k
and Pk , we compute a bounding cube based on the intersection of 4.2 Connectivity Optimization
camera’s viewing frustums. We then discretize the cubic volume into
The discrete visual hull typically contains an excessive amount of
N × N × N uniform voxels x i ∈ R3 with N indicates the resolution
isolated components. These isolated components significantly in-
of voxelization. We define the relations between voxels and the
crease the difficulty in manufacturing a physical wire sculpture.
reference images as follows. Let x̃ ik ∈ R2 be the 2D projection of ith
Therefore, our goal in this step is to find an optimal set of 3D paths
voxel x i with respect to projection Pk for the k th view. We label a such that the isolated components are linked to form a single con-
image pixel pk ∈ Vk as complete if it falls within one of the circles nected component while minimizing the projection error caused
(with a radius of 2 pixels) centered at x̃ ik . We call a voxel consistent by inconsistent voxels that compose the 3D paths. We formulate
if it can establish the mapping (x̃ ik , Vk ) for all k views. the problem as a conventional minimum spanning tree problem
The reconstruction process starts by initializing the visual hull on a graph representing the isolated components and their spatial
with a set of consistent voxels. Not surprisingly, the projection of relationships.

ACM Transactions on Graphics, Vol. 37, No. 6, Article 242. Publication date: November 2018.
Multi-view Wire Art • 242:5

Components Constraint volume 3D pathfinding Visual hull proj. Thinning w/o D cont Thinning w/ D cont

Fig. 6. Constrained 3D pathfinding. (Left) Given two components in 3D Fig. 7. Continuity penalty. (Left) Projection from the reconstructed visual
(red and purple voxels), we first find their 2D conterparts in each view. hull. (middle) Applying volumetric thinning algorithm on the visual hull may
(Middle) We then back-project the 2D shortest paths along the line contours result in broken line segments. (Right) Incorporate the continuity panelty
in the projected views (blue pixels) and construct a constraint volume by D cont in the edge weights w i j help alleviate endpoints shrinkage artifacts.
taking the union of the generalized cones. (Right) 3D pathfinding algorithm
traces a sequence of voxels (blue voxels) to connect the two components.

Shape-aware edge weight. We define the edge weight w i j in the


component graph as follows:
Graph construction. First, we apply a flooding algorithm to con-
stituent voxels of discrete visual hull and generate a collection of n
1
D k (x̃ ik ) + D cont (X i , X j ).
Õ Õ
isolated components, denoted as X = {X 1 , ..., Xm }. Next, we con- wi j = (3)
|Pi j |
struct a component graph G = {X, E} by adding a graph edge ei j to ∀x i ∈ Pi j k =1
E if the shortest distance between two components X i and X j is be-
low a pre-defined threshold (15% of the edge length of the bounding Intuitively, the first term measures the projection error induced from
cube in our experiments). For each graph edge ei j , we employ the path Pi j with respect to the input images. Here, we introduce the
A∗ pathfinding algorithm with best-first-search strategy to trace a continuity penalty term, D cont (X i , X j ), to avoid endpoints shrinkage
3D path from X i to X j . Two voxels, x src ∈ X i and x dst ∈ X j , which effects due to volumetric thinning as follows:
define the shortest distance between X i and X j , represent respec-
∃k, s.t. X̃ ik ∩ X̃ ik , ∅

0,
tively the source and destination for 3D pathfinding. We define a D cont (X i , X j ) = , (4)
10, otherwise
heuristic function for guiding pathfinding as follows:
n D cont (X i , X j ) captures whether the 2D projections of two disjoint
D k (x̃ ik ) + 0.6∥x i − x dst ∥22 .
Õ
h(x i ) = (2) components X i and X j form a continuous line segment in the pro-
k=1 jective views. If so, we want to ensure a path connecting two such
components. This is achieved by adding a large constant value (10
The first term encourages selecting a voxel with smallest projection in Equation 4) to those components that do not present such a
error, while the second term regularizes the tracing direction toward relationship. Figure 7 shows the effect of this penalty term.
the destination voxel x dst . The resulting 3D path, composed of voxels
that connect the two components X i and X j , is denoted as Pi j . Optimization. We apply Kruskal’s algorithm to the component
graph G and obtain a minimum weight spanning tree. The final
Constrained 3D pathfinding. Note that a greedy pathfinding al- visual hull corresponds to the voxels that compose the resulting
gorithm that freely explores the volume space is computationally spanning tree.
expensive. Here, we exploit the mapping between the target visual
hull and the input images to construct a volumetric manifold to
5 3D CURVE EXTRACTION
which the searching space is restrained while maintaining the pro-
jection quality. Specifically, given a pair of components {X i , X j } and In this section, we describe our method for extracting smooth and
corresponding projections {X̃ ik , X̃ jk } by Pk , we compute a 2D short- continuous 3D curves from the reconstructed visual hull. To this
end, we first employ a volumetric thinning method [Liu et al. 2010]
est path from X̃ ik to X̃ jk on the k-th input image G k , and back-project to extract a shape- and topology-preserving curve skeleton from the
pixels in the shortest path to obtain a constraint volume, denoted visual hull. This curve skeleton is then divided into a set of skeleton
as Hi,k j . We define the constraint volume as Hi, j = k=1∼n Hi,k j ,
Ð
lines according to its topology. However, as the process of visual hull
where n is the number of input images. Intuitively, as the constraint reconstruction is performed locally and does not take into account
volume Hi,k j has projections along the contour on k th image, tracing the compactness of global structure, the extracted curve skeletons
path within the constraint volume thus does not incur additional may contain many redundant parts with complex geometry. This
projection errors while pruning out a large portion of the volume makes the physical realization process (e.g., 3D printing) extremely
space. We modify the pathfinding algorithm to consider only voxels challenging, or not even possible. To tackle this problem, we perform
in Hi, j when connecting components X i and X j and hence obtain a structure simplification on the curve skeleton that grounds on a
a significant boost in timing (Section 7.2). Figure 6 illustrates the novel quality measurement tailor-made for our context to strike a
proposed constrained 3D pathfinding approach. balance between projection error and structure compactness.

ACM Transactions on Graphics, Vol. 37, No. 6, Article 242. Publication date: November 2018.
242:6 • Hsiao, Huang, and Chu

Curve skeleton Naïve curve fitting Curve deformation (a) Input skeleton (b) User-defined strokes (c) Updated result
Fig. 8. Effect of image-guided 3D curve deformation. Given the ex-
tracted curve skeleton (left), using Naïve curve fitting may produce para-
metric curves that do not respect the given contours at the projected views
(middle). The proposed image-guided 3D curve deformation step help reduce
the distortion (right). Before repair After repair Before simplify After simplify

Fig. 9. User editing example. (a) The automatic structure simplification


step may produce over-simplified skeleton (purple box) and under-simplified
5.1 Quality Measurement (light blue box) result. (b) The user can locally alter the result by specifying
Given a curve skeleton denoted as L = {L 1 , ..., Lm }, where Li rep- either repair strokes or simplify strokes on a particular input shape. (c) Given
resents the skeleton line that is composed of a sequence of skeleton the user inputs, our system automatically updates the structure of associ-
points in 3D. We propose to measure the quality of L as: ated skeleton lines to obtain a new result that respects user’s intention at
interative rate (see supplementary material for examples).
E quality (L) = E proj (L) + αE struct (L), (5)

where E proj and E struct measure the bidirectional projection error and Structure compactness. The structure compactness energy term,
structure compactness of L, respectively and the parameter α is used on the other hand, aims to capture the complexity of curve skeleton
to control the relative weights between the two energy terms. in terms of its 3D structure and 2D counterparts in the projected
views. We define the energy function as:
Bidirectional projection error. The bidirectional projection error
aims to measure the shape similarity between the curve skeleton E struct (L) = E redundancy (L) + βE complexity (L). (9)
and input image in the projected view. Specifically, we propose to The first term E redundancy measures the redundancy of mapping
estimate (i) the deviation, indicating how much the projection of from curve skeleton to input images by counting how many points
curve skeleton deviates from the input image; and (ii) the incom-
on the skeleton line Li ∈ L map to the same foreground pixel pk ∈
pleteness, indicating how much portion of the input image is not
Vk . The second energy function E complex (L) counts the number
recovered by the projection of curve skeleton. Note that we render
of skeleton lines in L. The parameter β is used to trade off two
the skeleton lines on the projected view using a circle shape with
energy functions. We empirically set α = 0.4, β = 0.38, w k = 2 in
radius of 1 pixel width. We denote the foreground pixels of input
all experiments.
image as Vk and projection of curve skeleton as L˜k . The projection
error E proj is of the form: 5.2 Curve Structure Simplification.

n Given the definition of quality measurement on the curve skeleton,
E proj (L) = E dev (L̃k , Vk ) + w k E incomp (L̃k , Vk ), (6) our goal here is to determine a subset of skeleton lines L  ⊂ L so
k=1 that we can minimize E quality (L  ) while maintaining connectivity
of the selected subset L  . This amounts to a binary labeling problem
with
1  over a set of skeleton lines, and thus existing global optimization
E dev (L̃k , Vk ) = D k (pk ), (7) methods are intractable. We implement a greedy approach to achieve
| L̃k | approximate local minimum by iteratively removing skeleton lines
∀pk ∈ L̃k
using a priority queue. We leave the implementation details in the
and
1  supplementary document.
E incomp (L̃k , Vk ) = D L̃ (pk ). (8)
|Vk |
∀pk ∈Vk
k 5.3 3D Curve Fitting and Deformation
Given the curve skeleton, we can easily obtain a smooth and con-
Two functions, D k and D L̃ , represent the distance transform maps tinuous wire sculpture by fitting the individual skeleton lines with
k
computed by input Vk and the projected imageL˜k , respectively. parametric curve using cubic splines. However, a naïve curve fitting
The parameter w k controls the relative importance between energy often leads to irregular curve shapes inherited from the zigzag fea-
functions with respect to each input image, and can be utilized to tures of reference skeleton lines (see Figure 8 (left, middle)). To avoid
support flexible user controls (Section 6). these artifacts, we devise a curve deformation model that iteratively

ACM Transactions on Graphics, Vol. 37, No. 6, Article 242. Publication date: November 2018.
Multi-view Wire Art • 242:7

First view Int. view Second view

Fig. 10. Reproducing artist’s results. Using artist’s input of ele- Fig. 11. 3D printouts. Two 3D printouts of multi-view wire sculptures
phant/giraffe in Figure 2, our results show visually similar 2D projections as generated by our system using FDM (middle) and SLM (right) process.
the input at both viewpoints. The intermediate view, however, reveals that
our 3D wire sculpture has more complex structure than that from the artist
(see Figure 2).
Reproducing artist’s results. We apply our method to reproduce
the elephant/giraffe example. As shown in Figure 10, our 3D wire
moves the control points on the curves toward the directions guided sculpture is as accurate as artwork in the projective views, but has
by the spatial relations between projected control points and input slightly more complex structure than that by the artist (see Figure 2).
images. We show the effect of curve deformation in Figure 8 and Augmenting the inputs to three views (see Figure 12, 1st row) further
refer the readers to the supplementary material for details. demonstrates that our system can produce 3D wire sculptures that
go beyond the complexity of existing pieces of art.
6 USER CONTROL
3D printout. To evaluate the physical realizability of the resulting
While our method can produce a 3D wire sculpture that best ap- 3D wire sculptures, we take two examples from our gallery and
proximates the given line drawings at the specified view in fully print out the models using industry-level 3D printing machines
automatic fashion, the users may want to modify and refine the with different levels of resolutions. Specifically, the 3-transportation
sculpture to enhance artistic effects of the structure or fix errors sculpture (Figure 12) was printed by a machine with 100-micron
produced by our system. Here, we introduce two types of 2D strokes: accuracy using FDM (fused deposition modeling) technique with
(i) repair strokes and (ii) simplify strokes to support interactive user PLA (polylactic acid). The 3-bird sculpture (Figure 4) was printed
editing on the wire sculpture. The user simply needs to specify by a machine with 30-micron accuracy using SLM (selective laser
2D strokes on the image of the wire sculpture projected onto the melting) technique with metallic powders (CoCrMo alloy). Figure 11
k-th input image, while our system will automatically update the shows the 3D printouts along with their physical exhibition. Note
local structure by (i) collecting skeleton lines that intersect with that the 3D printout may inevitably suffer from distortion due to the
user strokes in the projected view; and (ii) re-running the structure thin and curve shape of wire structure, and hence introduce more
simplification process on the selected skeleton lines with updated projection errors than its digital version.
parameters. For the repair strokes, we increase the parameter w k
in Equation 6 from 2 to 8 to encourage the completeness in the
7.2 Evaluation
k-th input image. For the simplify strokes, we double the parameter
α in Equation 5 to amplify the structure compactness during the In this section, we evaluate and characterize the performance of
simplification. We demonstrate the effect of user editing in Figure 9. our method over several important design choices. The source code,
input line drawings, and pre-computed wire sculptures will be made
7 EXPERIMENTAL RESULTS publicly available to stimulate further research.
7.1 Visual results Dataset and evaluation metrics. In addition to the triple-image
We evaluate our method on a wide variety of input line drawing examples shown in the paper, where the input images are from the
images with varying complexity. We implement two types of exhibi- same category and have similar complexity and style, we further
tion modes: (i) orthogonal mode and (ii) in-plane mode. The former prepared an extra set of 21 triple-image cases by randomly picking
one arranges the virtual cameras in a mutually orthogonal setting, three images out of a large set of 45 input line drawing images with
while the latter places all the cameras on the same plane with vary- different styles and varying complexity.
ing viewing angles. We automatically generated 20 multi-view wire We use two quantitative metrics to evaluate the quality of gener-
sculptures in total. Figure 12 shows a gallery of the generated wire ated 3D wire sculptures. Our first metric measures the projection
sculptures exhibited in orthogonal mode. Examples of in-plane exhi- errors: E dev + Eincomp (Section 5.1). The project errors, however,
bition mode can be found in Figure 1 (middle, right). In addition to may not reflect the quality well due to its sensitivity to outliers.
static exhibition, the best way to demonstrate such a unique sculp- Our second metric measures the projection accuracy by mean aver-
ture art is through a dynamic setting, where the 3D model, camera age precision (mAP). Specifically, we vary the distance threshold
poses, and lighting are animated and working together to depict ranging from 0 (precise matching) to 10 pixels (coarse matching),
different input images over time. Please refer to supplementary compute the precision and recall at each step, and compute the mAP
material for more static and dynamic examples. value across all images.

ACM Transactions on Graphics, Vol. 37, No. 6, Article 242. Publication date: November 2018.
242:8 • Hsiao, Huang, and Chu

Our results Input binary images Our results

Fig. 12. Visual results of multi-view wire art. We present several examples of input line drawings with varying complexity. Given a set of three input
images (middle), our system automatically generates 3D wires that exhibits three distinct line drawings when perceived from three orthogonal view points
(left, right). Our method handles inputs with varying complexity. More results can be found in the supplementary material.

ACM Transactions on Graphics, Vol. 37, No. 6, Article 242. Publication date: November 2018.
Multi-view Wire Art • 242:9

643 1283 2563 5123 Inputs 30 degrees 60 degrees 90 degrees

(a) Projection error (b) Mean average precision (a) Projection error (b) Mean average precision

Fig. 13. Effect of voxelization resolution. (Top) Increasing voxel resolu- Fig. 15. Effect of angle intervals. When placing all the three view points
tion helps resolve the intricate details of the inputs in the projected images. on the same plane, the reprojection accuracy depends heavily on the selected
(Bottom) Quantitative results in terms of (a) bidirectional projection er- angle intervals. The similarity between the respective projections and the
ror and (b) mean average precision. The error bars indicate the standard input images degrades significantly when angle intervals are small (because
deviation computed from the set of examples in the dataset. it is difficult to optimize the 3D wire with dramatically different projections
under a small view point shift) or close to 90 degrees (because the first and
the third views will be located at opposite viewing directions and create
Resolution of voxelization. As the initial geometry and structure conflicting contours).
of 3D curve skeleton are determined by the reconstructed visual
hull, voxel resolution plays a critical role in recovering the details
of the visual hull, particularly due to the intricate details presented Effect of parameter α. In Section 5.1, we use a parameter α to
in the line drawings. We characterize the performance over voxel control the relative importance between the projection error and
resolutions N 3 = [5123 , 2563 , 1283 , 643 ] in orthogonal mode. Fig- structure compactness during the structure simplification process.
ure 13 shows a visual example as well as the quantitative results. Figure 14 shows the effect of α by generating curve skeleton with
Our results validate the intuition that increasing voxel resolution varying α. Setting α to either small value (0.1) or large value (0.9)
allows us to capture finer details in the inputs. However, this comes may produce over-complex or over-simplified results. Our default
at the cost of memory and computational complexity. We believe setting with α = 0.4 (fixed throughout all of our experiments) strikes
that using an octree-based method can better address the trade-off a balance between projection error and structure compactness.
than the uniform voxelization in this work.
Viewing angles. In the in-plane mode with three images, we inves-
tigate the effect of angle interval and report the evaluation results
in Figure 15. In the case of three input drawings, placing the view-
points at an angle interval of 60 degrees achieves the best visual
and quantitative results.

α = 0.1 α = 0.4 (default) α = 0.9 Q metric = 0.046 Q metric = 0.225

Fig. 14. Trade-off between structure compactness and projection er-


ror. Setting parameter α in Equation 5 to a small value (left) encourages
extracting curves that align the line drawings well, but suffers from complex Fig. 16. Quality indicator. (Left) Scatter plot for the 21 cases. (Right) Two
wire structure. On the other hand, setting a large value of α (right) produce examples of lowest and highest value of Q metric . The voxels of initial visual
simple wire structures with missing contour at projected views. We empiri- hull and expanded visual hull are colored in blue and red, respectively. Our
cally set α = 0.4 (fixed for all the experiments shown in the paper) to strike quality indicator correlates well with the final projection errors of the wire
a balance between projection error and structure compactness. sculptures.

ACM Transactions on Graphics, Vol. 37, No. 6, Article 242. Publication date: November 2018.
242:10
ShadowArt [2009]
Our results • Hsiao, Huang, and Chu

Voxels 2D projections Voxels 2D projections

Fig. 17. Comparision with Shadow Art. (Top) The results of Shadow Art [Mitra and Pauly 2009] using input images from Figure 12 (7th row) and Figure 1
(left). (Bottom) The generated visual hull from our method using the same input images. The results from Shadow Art suffer numerous disjoint components in
the reconstructed voxel structures and severely distored 2D projections that deviate from input line drawings (highlighted in red) due to image deformation.

Quality indicator. To evaluate how well the quality of the wire resultant voxel structures generated by [Mitra and Pauly 2009] may
sculpture relates to the initial discrete visual hull, we propose a qual- contain many disjoint components due to the highly inconsistent
ity indicator metric, Q metric = |V H initial |/|V H expanded |, which com- nature of input line drawings. Moreover, a severely distorted pro-
putes the ratio between the number of voxels in the initial visual hull jection is often inevitable due to an image deformation approach
V H initial and that in the expanded visual hull V H expanded (see Fig- for eliminating inconsistent voxels. Our system, in contrast, can
ure 5). Higher values of Q metric indicate more triple-consistent vox- generate voxel structures with one single connected component
els in the initial visual hull. For the 21 test cases used for evaluation, without introducing significant projection errors. Please refer to the
we found that the ratio correlates well with the projection errors supplementary material for a complete set of comparison.
with a Spearman’s rank correlation coefficient = -0.818. Figure 16
shows the scatter plot of all 21 test cases. As computing the ratio 7.3 Limitations
Q metric is efficient (< 1 minute), we can use it as quick test for qual- Our method has limitations in the following aspects.
ity, which allows us to filter out challenging cases that may lead to
failures. Results. Surprisingly, we find that the our method does not per-
form well on simple input images. Figure 18 shows an example in
Timing analysis. Here we report the timing statistics of running orthogonal mode. In this case, our method produces clearly visible
our non-optimized method on the dataset using a moderate PC with artifacts due to the difficulty in resolving inconsistency from sim-
an Intel Core i7 6700K (3.4GHz) and 32GB memory. The overall time ple contours. Our approach also has difficulty in resolving dense
complexity depends on the voxel resolution and the complexity of line strokes (e.g., hairs) in the projected image due to limited voxel
input images. For example, the average running time for simple resolution.
images (e.g., 3rd row in Figure 12) and complex images (e.g., 1st row
in Figure 12) with resolution 5123 are 83 and 510 minutes, respec- Speed. Our current implementation may take up to several hours
tively. Decreasing the resolution to 2563 will yield a 7X speedup. to construct a multi-view wire sculpture (depending on the chosen
The major computational bottleneck lies in the connectivity opti- voxel resolution and input line drawing complexity). Once the initial
mization step in Section 4.2 (73% of the total running time). Using the
proposed constrained 3D pathfinding algorithm provides more than
2X speedup. The structure simplification (Section 5.2) that occupies
27% of running time mainly depends on the complexity of input
images. The average running time for generating discrete visual
hull (Section 4.1) and curve fitting and deformation (Section 5.3)
took about 1 − 3 seconds and is thus negligible. Employing a hierar-
chical multi-scale implementation and parallelizing the sequential
Inputs Our results
algorithm of 3D pathfinding can further speed up the system.
Comparision with Shadow Art. We ran the code of [Mitra and Fig. 18. Limitations. Our system fails to delineate clean contours with very
Pauly 2009] on 11 three-view examples. As shown in Figure 17, the simple shapes due to large inconsistency between input images.

ACM Transactions on Graphics, Vol. 37, No. 6, Article 242. Publication date: November 2018.
Multi-view Wire Art • 242:11

wire sculpture is constructed, the user editing (Section 6) can run at Michael Goesele, Noah Snavely, Brian Curless, Hugues Hoppe, and Steven M Seitz.
interactive rate. 2007. Multi-view stereo for community photo collections. In ICCV.
Manuel Hofer, Michael Maurer, and Horst Bischof. 2017. Efficient 3D scene abstraction
using line segments. Computer Vision and Image Understanding 157 (2017), 167–178.
Optimization. While our results validate the capability of our Po-Han Huang, Kevin Matzen, Johannes Kopf, Narendra Ahuja, and Jia-Bin Huang.
algorithm for generating multi-view wire sculptures, the proposed 2018. DeepMVS: Learning Multi-view Stereopsis. In CVPR.
method involves on several heuristic (and therefore brittle) opti- Emmanuel Iarussi, Wilmot Li, and Adrien Bousseau. 2015. WrapIt: computer-assisted
crafting of wire wrapped jewelry. ACM Trans. Graph. (Proc. of SIGGRAPH Asia) 34,
mization steps. Formulating the problem as a more principled global 6 (2015), 221.
optimization framework is an important future direction. Jeroen Keiren, Freek van Walderveen, and Alexander Wolff. 2009. Constructability of
trip-lets. In 25th European Workshop on Comp. Geom.
Fabrication. Our system does not incorporate any physical simu- Andreas Kuhn, Heiko Hirschmüller, Daniel Scharstein, and Helmut Mayer. 2017. A
tv prior for high-quality scalable multi-view stereo reconstruction. International
lations nor impose the single wire assumption as in [Liu et al. 2017]. Journal of Computer Vision 124, 1 (2017), 2–17.
Consequently, we cannot guarantee the generated wire sculptures Ying-Miao Kuo, Hung-Kuo Chu, Ming-Te Chi, Ruen-Rone Lee, and Tong-Yee Lee. 2016.
are 3D printable. Possible ways to achieve fast and inexpensive 3D Generating Ambiguous Figure-Ground Images. IEEE Trans. Vis. Comput. Graph. PP
(2016). Issue 99.
printing include considering the constraints of the wire-bending A. Laurentini. 1994. The Visual Hull Concept for Silhouette-Based Image Understanding.
machine in the modeling process [Miguel et al. 2016], or decompos- IEEE Trans. Pattern Analysis Machine Intelligence 16, 2 (1994), 150–162.
Svetlana Lazebnik, Yasutaka Furukawa, and Jean Ponce. 2007. Projective visual hulls.
ing the wire sculpture into a set of single wires and fabricating each International Journal of Computer Vision 74, 2 (2007), 137–165.
wire separately. Lingjie Liu, Duygu Ceylan, Cheng Lin, Wenping Wang, and Niloy J. Mitra. 2017. Image-
based Reconstruction of Wire Art. ACM Trans. Graph. (Proc. of SIGGRAPH) 36, 4
8 CONCLUSIONS (2017), 63:1–63:11.
L Liu, Erin W Chambers, David Letscher, and Tao Ju. 2010. A simple and robust thinning
We have presented a method for enabling novice users to construct algorithm on cell complexes. In Comput. Graph. Forum (Proc. EUROGRAPHICS),
Vol. 29. 2253–2260.
multi-view wire sculptures. The core idea of our approach lies in Wojciech Matusik, Chris Buehler, Ramesh Raskar, Steven J Gortler, and Leonard McMil-
reconstructing visual hull and extracting curve skeletons to balance lan. 2000. Image-based visual hulls. In Proc. of ACM SIGGRAPH.
projection error and spatial compactness. Through extensive evalua- Eder Miguel, Mathias Lepoutre, and Bernd Bickel. 2016. Computational Design of
Stable Planar-rod Structures. ACM Trans. Graph. (Proc. of SIGGRAPH) 35, 4, Article
tions in both simulation and 3D physical printout, we show that our 86 (2016), 11 pages.
method can produce smooth and compact wire sculptures exhibiting Sehee Min, Jaedong Lee, Jungdam Won, and Jehee Lee. 2017. Soft shadow art. In Proc.
of the Symposium on Computational Aesthetics. 3.
multiple prescribed images with minimum distortions. We believe Niloy J. Mitra and Mark Pauly. 2009. Shadow Art. ACM Trans. Graph. (Proc. of SIGGRAPH
that our framework can help democratize this unique art form and Asia) 28, 5 (2009), 156:1–156:7.
enable artists, designers, and hobbyists to create their own multi- Aude Oliva, Antonio Torralba, and Philippe G. Schyns. 2006. Hybrid Images. ACM
Trans. Graph. (Proc. of SIGGRAPH) 25, 3 (2006), 527–532.
view wire sculptures. We see several interesting future directions. Luke Olsen, Faramarz F Samavati, Mario Costa Sousa, and Joaquim A Jorge. 2009.
This work focuses on creating wire sculptures with one connected Sketch-based modeling: A survey. Computers & Graphics 33, 1 (2009), 85–103.
component. Creating sculptures with multiple parts may provide Jim Postell. 2012. Furniture design. John Wiley & Sons.
Alec Rivers, Frédo Durand, and Takeo Igarashi. 2010. 3D Modeling with Silhouettes.
more complex interactions among parts and offer richer viewing ex- ACM Trans. Graph. (Proc. of SIGGRAPH) 29, 4, Article 109 (2010), 8 pages.
periences. Our current approach resolves the inconsistency among Johannes L Schonberger and Jan-Michael Frahm. 2016. Structure-from-motion revisited.
In CVPR.
input line drawings through minimizing the projection errors based Johannes L Schönberger, Enliang Zheng, Jan-Michael Frahm, and Marc Pollefeys. 2016.
on the original input images. An semantic-aware method may help Pixelwise view selection for unstructured multi-view stereo. In ECCV.
significantly improve the visual quality by favoring projected lines Wire Sculpture. 2007. https://en.wikipedia.org/wiki/Wire_sculpture. (2007).
Guy Sela and Gershon Elber. 2007. Generation of view dependent models using free
that are plausible in the input line drawings. form deformation. The Visual Computer 23, 3 (2007), 219–229.
Noah Snavely, Steven M Seitz, and Richard Szeliski. 2006. Photo tourism: exploring
ACKNOWLEDGMENTS photo collections in 3D. In ACM Trans. Graph. (Proc. of SIGGRAPH), Vol. 25. 835–846.
Richard Szeliski. 1993. Rapid octree construction from image sequences. CVGIP: Image
We are grateful to the anonymous reviewers for their comments and understanding 58, 1 (1993), 23–32.
suggestions. We also like to thank National Applied Research Labo- Anil Usumezbas, Ricardo Fabbri, and Benjamin B Kimia. 2016. From multiview image
curves to 3D drawings. In ECCV.
ratories and Xian-Guang Zhong for helping on the 3D printing. The WigJig. 2015. https://www.wigjig.com/. (2015).
work is supported in part by the Ministry of Science and Technology Jungdam Won and Jehee Lee. 2016. Shadow theatre: discovering human motion from a
sequence of silhouettes. ACM Trans. Graph. (Proc. of SIGGRAPH) 35, 4 (2016), 147.
of Taiwan (107-2218-E-007-047- and 107-2221-E-007-088-MY3). Jonas Zehnder, Stelian Coros, and Bernhard Thomaszewski. 2016. Designing
Structurally-sound Ornamental Curve Networks. ACM Trans. Graph. (Proc. of
REFERENCES SIGGRAPH) 35, 4, Article 99 (2016), 10 pages.
Marc Alexa and Wojciech Matusik. 2010. Reliefs as images. ACM Trans. Graph. (Proc.
of SIGGRAPH) 29, 4 (2010), 60–1.
Amit Bermano, Ilya Baran, Marc Alexa, and Wojciech Matusk. 2012. Shadowpix: Multi-
ple images from self shadowing. In Comput. Graph. Forum (Proc. EUROGRAPHICS),
Vol. 31. 593–602.
Robert T Collins. 1996. A space-sweep approach to true multi-image matching. In
Computer Vision and Pattern Recognition, 1996. Proceedings CVPR’96, 1996 IEEE
Computer Society Conference on. IEEE, 358–363.
Ricardo Fabbri and Benjamin Kimia. 2010. 3D curve sketch: Flexible curve-based stereo
reconstruction and calibration. In CVPR.
Yasutaka Furukawa and Carlos Hernández. 2015. Multi-view stereo: A tutorial. Foun-
dations and Trends® in Computer Graphics and Vision 9, 1-2 (2015), 1–148.
Yasutaka Furukawa and Jean Ponce. 2010. Accurate, dense, and robust multiview
stereopsis. IEEE Trans. Pattern Analysis Machine Intelligence 32, 8 (2010), 1362–1376.

ACM Transactions on Graphics, Vol. 37, No. 6, Article 242. Publication date: November 2018.

You might also like