Watercolor
Figure 1: Various watercolor-like images obtained either from a 3d model (a,b) or from a photograph (c) in the same pipeline.
Abstract

This paper presents an interactive watercolor rendering technique that recreates the specific visual effects of lavis watercolor. Our method allows the user to easily process images and 3d models and is organized in two steps: an abstraction step that recreates the uniform color regions of watercolor and an effect step that filters the resulting abstracted image to obtain watercolor-like images. In the case of 3d environments we also propose two methods to produce temporally coherent animations that keep a uniform pigment repartition while avoiding the shower door effect.

Figure 2: Real watercolor (© Pepin van Roojen). (a) Edge darkening and wobbling effect, (b) pigment density variation, (c) dry brush.

Keywords: Non-photorealistic rendering, watercolor, temporal coherence, abstraction.

1 Introduction

Watercolor offers a very rich medium for graphical expression. As such, it is used in a variety of applications including illustration, image processing and animation. The salient features of watercolor images, such as the brilliant colors, the subtle variation of color saturation and the visibility and texture of the underlying paper, are the result of a complex interaction between water, pigments and the support medium.

In this paper, we present a set of tools allowing the creation of watercolor-like pictures and animations. Our emphasis is on the development of intuitive controls, placed in the hands of the artists, rather than on a physically-based simulation of the underlying processes. To this end, we focus on what we believe to be the most significant watercolor effects, and describe a pipeline where each of these effects can be controlled independently, intuitively and interactively.

Our goal is the production of watercolor renderings either from images or 3d models, static or animated. In the case of animation rendering, temporal coherence of the rendered effects must be ensured to avoid unwanted flickering and other annoyances. We describe two methods to address this well-known problem that differ in the compromises they make between 2d and 3d.

In the following, we will first review the visual effects that occur with traditional watercolor and present related work. Our pipeline is then described for images and fixed viewpoint 3d models, followed by our methods that address the temporal coherence of animated 3d models. Finally, we will show some results before concluding.

ARTIS is a team of the GRAVIR/IMAG research lab (UMR C5527).
Figure 3: Static pipeline: Our pipeline takes as input either a 3d model or a photograph; the input is then processed to obtain an abstracted image, and finally watercolor effects are applied to produce a watercolor-like image.
The main effect in watercolor is the color variation that occurs in uniformly painted regions. We first present the technique we use to reproduce these variations by modifying the color of the abstracted image, and then we describe all the effects specific to watercolor.

4.1.1 Color modification

Figure 5: Darkening and lightening of a base color C (here (0.90, 0.35, 0.12)) using a density parameter d ranging from 0 to 2.

As mentioned earlier we chose not to use a physical model to describe color combinations. The essential positive consequence of this choice is that the user can freely choose a base color (think of a virtual pigment), whose variation in the painted layer will consist of darkening and lightening, as shown in Figure 5.

We build an empirical model based on the intuitive notion that the effects due to pigment density can be thought of as resulting from light interaction with a varying number of semi-transparent layers. Consider first a color C (0 < C < 1) obtained by a uniform layer of pigment over a white paper; we can interpret this as a subtractive process where the transmittance of the layer is t = √C (the observed color is the transmittance squared, since light must travel through the layer twice). Adding a second layer of the same pigment therefore amounts to squaring the transmittance, and hence also the observed color.

We need to be able to continuously modify the color, rather than proceeding with discrete layers or squaring operations. A full, physically based analysis could be based on the description of a continuous pigment layer thickness and the resulting exponential attenuation. Rather, we obtain similar phenomenological effects by introducing a pigment density parameter d, which provides the ability to continuously reinforce or attenuate the pigment density, where d = 1 is the nominal density corresponding to the base color C chosen by the user. We then modify the color based on d by removing a value proportional to the relative increase (respectively decrease) in density and the original light attenuation 1 − C. The modified color C′ for density d is thus given by

    C′ = C (1 − (1 − C)(d − 1))    (1)
       = C − (C − C²)(d − 1)

Note that the value d = 2 corresponds to the two-layer case discussed above and to a squared color.

4.1.2 Pigment density variation

As described in section 2, the density of pigments varies in several ways. We propose to add three layers, one for each effect: turbulent flow, pigment dispersion and paper variations. No physical simulation is performed. Instead, each layer is a gray-level image whose intensity gives the pigment density as follows: an image intensity of T ∈ [0, 1] yields a density d = 1 + β(T − 0.5), which is used to modify the original color using formula 1. β is a global scaling factor used to scale the image texture values and allow arbitrary density modifications. The three effects are applied in sequence and each can be controlled independently (see Figure 4-(g,h,i)). The computation is done per pixel using a pixel shader. The paper layer is applied on the whole image whereas the other layers are applied only on the object projection.

The user is free to choose any kind of gray-level textures for each layer. We found it convenient to use Perlin noise textures [Perlin 1985] for the turbulent flow, a sum of Gaussian noises at various scales for pigment dispersion, and scanned papers for the paper layer.

4.1.3 Other watercolor effects

The pigment density treatment recreates the traditional texture of real watercolor. We then add, as in most previous techniques, several typical watercolor effects:

Edge darkening: Edges are darkened using the gradient of the abstracted image (see Figure 4-(f)). The gradient is computed on the GPU using a fast, symmetric kernel:

    ∇(p_x,y) = |p_x−1,y − p_x+1,y| + |p_x,y−1 − p_x,y+1|

The gradient intensity (used to modify the pigment density using formula 1) is computed by averaging the gradient of each color channel. Any other gradient computation could be used depending on the compromise between efficiency and realism needed by the user. Our method is clearly on the efficiency side.

Wobbling: The abstracted image is distorted to mimic the wobbling effect along edges due to the paper granularity. The x offset is computed with the horizontal paper gradient, and the y offset with the vertical paper gradient (see Figure 4-(e)). The user can change the paper texture resolution: indeed, the real process involves complex interactions, and using the paper texture directly may produce details at too fine a scale. By decreasing the resolution we keep the overall canvas structure while decreasing the wobbling frequency. We could have also used another texture for this effect, as in [Chu and Tai 2005]. As a side effect, the paper offset adds noise along edges, which decreases the aliasing effect.

Figure 4: Static pipeline illustrated for a 3d model: (a) 3d model, (b) original color chosen by the user, (c) toon shading, (d) dry brush (not kept for this example), (e) wobbling, (f) edge darkening, (g) pigment dispersion layer, (h) turbulence flow layer, (i) paper, (j) final result.

that it stays consistent with the rest of the pipeline. The user thus simply has to provide a gray-level 1d texture.
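As a concrete illustration, the color modification of equation 1, the density mapping of section 4.1.2, and the edge-darkening kernel can be sketched on the CPU as follows. This is a minimal sketch, not the paper's GPU pixel shader; the name `beta` stands for the global scaling factor whose symbol was garbled in this copy.

```python
def density(T, beta=1.0):
    """Map a gray-level layer intensity T in [0, 1] to a pigment
    density around the nominal value d = 1 (beta is the global
    scaling factor of section 4.1.2)."""
    return 1.0 + beta * (T - 0.5)

def modify_color(c, d):
    """Equation (1): C' = C - (C - C^2)(d - 1) for one channel
    c in (0, 1); d = 2 reproduces the two-layer case c**2."""
    return c - (c - c * c) * (d - 1.0)

def edge_gradient(img, x, y):
    """Symmetric gradient kernel of section 4.1.3 on a row-major
    RGB image img[y][x] = (r, g, b), averaged over the channels."""
    gx = [abs(a - b) for a, b in zip(img[y][x - 1], img[y][x + 1])]
    gy = [abs(a - b) for a, b in zip(img[y - 1][x], img[y + 1][x])]
    return sum(g1 + g2 for g1, g2 in zip(gx, gy)) / 3.0
```

Applying the three density layers then amounts to three successive calls to `modify_color(c, density(T_layer))` per pixel, in the spirit of the per-pixel shader described above.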
Figure 8: Image abstraction: (a) original image, (b) without abstraction: direct application of watercolor effects, the image is too detailed, (c) result after segmentation, (d) final result with segmentation and morphological smoothing.

propose a color segmentation step. We use a mean shift algorithm [Comaniciu and Meer 1999] to perform such a segmentation. This gives a less detailed image, as presented in Figure 8-(c), showing the uniform color regions typical in watercolor.

4.2.3 Morphological Smoothing

After the previous step, either segmentation or illumination smoothing, some small color regions remain. We allow the user to apply a morphological smoothing filter (a sequence of one opening and one closing) to reduce the detail of color areas. The size of the morphological kernel defines the abstraction level of color areas and silhouettes (Figure 8-(d)); it can be viewed as the brush size. The same kind of 2d approach is used by [Luft and Deussen 2005], except that they use a Gaussian filter and thresholds to abstract their color layers.

5 Temporal coherence

In the static version of our watercolor pipeline, we have shown that a convincing watercolor simulation may be created by using texture maps that represent both low frequency turbulent flow and high frequency pigment dispersion. Using this method in animation leads to the shower door effect, where the objects slide over the pigment textures. Ideally, we would like the pigment effects to move coherently with each object, rather than existing independently.

While it is tempting to simply map pigment textures onto each object (as in [Lum and Ma 2001] for example), this leads to problems with ensuring the scale and distribution of noise features. For example, the features of a pigment dispersion texture that appear correct from a specific camera position may blur if the camera is zoomed out. Therefore, we seek a compromise between the 2d location of pigments and the 3d movements of objects.

This question has already been addressed in the case of the canvas, leading to two approaches of a dynamic canvas [Cunzi et al. 2003; Kaplan and Cohen 2005]. The problem there was to match the canvas motion to the camera motion. In the watercolor case, we want the pigment textures to follow each object's motion, not just the camera. We thus propose two different methods that each extend one of the previous dynamic canvas approaches.

As far as the abstraction steps are concerned, the toon shader and normal smoothing are intrinsically coherent since they are computed in object space. On the other hand, the morphological filter is not coherent from frame to frame, since it is computed in image space. Solving this issue would probably require following color regions along the animation as in video tooning [Wang et al. 2004]. Such a method would imply the loss of interactivity, and thus we have preferred to concentrate here on the watercolor effects part of the pipeline and leave the incoherency of the morphological smoothing for future work.

5.1 Attaching multiple pigment textures in object space

Our first method takes inspiration both from Meier's work on painterly rendering [Meier 1996] and Cunzi et al.'s work on a dynamic canvas [Cunzi et al. 2003].
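To make the morphological smoothing of section 4.2.3 above concrete, here is a minimal CPU sketch of one opening followed by one closing on a gray-level image. This is our own illustration under simple assumptions (square kernel, clamped borders), not the paper's implementation.

```python
def _morph(img, k, op):
    """Apply min (erosion) or max (dilation) over a k x k window,
    clamping the window at the image borders."""
    h, w, r = len(img), len(img[0]), k // 2
    return [[op(img[j][i]
                for j in range(max(0, y - r), min(h, y + r + 1))
                for i in range(max(0, x - r), min(w, x + r + 1)))
             for x in range(w)]
            for y in range(h)]

def morpho_smooth(img, k=3):
    """Sequence of one opening (erode, then dilate) and one closing
    (dilate, then erode); the kernel size k acts as the brush size."""
    opened = _morph(_morph(img, k, min), k, max)
    return _morph(_morph(opened, k, max), k, min)
```

Increasing `k` removes larger color details, mimicking a larger brush.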
Figure 9: (a,b) Our particle method illustrated with a chessboard texture. (a) Rotation: 2d textures follow 3d particles (black dots), (b)
z-translation: An infinite zoom of the texture maintains a constant frequency. (c) Result when using our pigment and paper textures.
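One common realization of the infinite zoom mentioned in Figure 9-(b) blends two copies of the texture whose scales bracket the current zoom factor, so the on-screen frequency stays roughly constant. The sketch below is an illustration of that general idea, not the exact scheme of [Cunzi et al. 2003]; `sample(u, v)` stands for any periodic texture lookup.

```python
import math

def infinite_zoom(sample, x, y, zoom):
    """Keep the apparent frequency of a texture roughly constant
    under camera zoom: pick the two power-of-two scales that
    bracket `zoom` and blend between them with the fractional
    part of log2(zoom)."""
    s = math.log2(zoom)
    k = math.floor(s)
    t = s - k                       # blend weight between the scales
    a = sample(x * zoom / 2 ** k, y * zoom / 2 ** k)
    b = sample(x * zoom / 2 ** (k + 1), y * zoom / 2 ** (k + 1))
    return (1 - t) * a + t * b
```

At zoom factors 1, 2, 4, ... the blend degenerates to a single copy, so the motion is cyclic and the zoom can continue indefinitely.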
Figure 11: (a) Progressive levels of noise generated interactively with increasing frequencies. (b) Compositing noise levels from (a) yields a
turbulence texture or (c) a pigment texture - using only high frequency noise textures. (d) The alpha texture used to draw the features, (e,f)
Notice how the pigments and turbulence features follow the surface. No paper texture is used for these images.
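The compositing of noise levels shown in Figure 11 can be sketched as a frequency-weighted blend. The sketch below uses a hypothetical hash-based lattice value noise as a stand-in for the particle-drawn noise textures of section 5.2, with levels at halved frequencies weighted by the harmonic function 1/f; the hash constants are arbitrary.

```python
def value_noise(x, y, freq, seed=0):
    """Deterministic lattice value noise in [0, 1] -- a stand-in
    for one particle-drawn noise texture (not the paper's method)."""
    def lattice(i, j):
        h = (i * 374761393 + j * 668265263 + seed * 974711) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
        return (h & 0xFFFF) / 0xFFFF

    xf, yf = x * freq, y * freq
    i, j = int(xf), int(yf)
    u, v = xf - i, yf - j
    # bilinear blend of the four lattice corners
    top = lattice(i, j) * (1 - u) + lattice(i + 1, j) * u
    bot = lattice(i, j + 1) * (1 - u) + lattice(i + 1, j + 1) * u
    return top * (1 - v) + bot * v

def turbulence(x, y, f=8.0, K=3):
    """Blend noise levels at F_k = f / 2**k, each weighted by 1/F_k
    (lower frequencies contribute more), normalized back to [0, 1]."""
    total = wsum = 0.0
    for k in range(K + 1):
        fk = f / 2 ** k
        total += (1.0 / fk) * value_noise(x, y, fk, seed=k)
        wsum += 1.0 / fk
    return total / wsum
```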
barycentric coordinates. This creates a noise texture with an average feature distribution of frequency f.

A turbulence texture may be created by summing a series of such noise textures - a typical series of frequencies might be, for base frequency f: F_0 = f, F_1 = f/2, F_2 = f/4, ..., F_K = f/2^K - by blending them with the contribution of each N_j being defined by the harmonic function 1/f. Examples are shown in Figure 11.

5.2.1 Minimizing Artifacts

This method yields an average frequency distribution of f rather than an exact distribution. Two problems may occur because of this random placement. First, we are not guaranteed to completely cover the image space. Attempting to order the spacing of the particle placement may not be useful since we cannot predict what type of perspective deformation the triangle will undergo. Therefore, the screen color is initially cleared to a flat middle gray; features then color the screen as an offset from a strict average rather than defining the actual screen color themselves as in standard noise. This may overly weight the screen to gray, but if our random distribution of feature locations is good (see [Turk 1990] for a discussion), then this problem will be minimal; in practice, this seems to be true. Second, we may create hotspots of feature placement when features overlap. Again, a good initial distribution of features will minimize this problem, but in practice, the blending function is smooth enough that this issue is not apparent.

The amount of screen space s_i occupied by each surface face i changes with camera motion. Because the number of features drawn is dependent on this value, new features may come into and out of existence very quickly, which will cause popping artifacts if not handled with care. We minimize this artifact by taking advantage of the fact that our turbulence texture is the sum of a series of noise textures. Continuity is maintained by moving features between each noise texture N_j and the textures N_j+1 and N_j-1. When s_i grows larger, features are moved from N_j-1 to N_j proportionally to the amount of screen space gained. Conversely, when s_i shrinks, features are moved from N_j+1 to N_j and from N_j to N_j-1. Because features are transitioned between noise textures at different frequencies, an ordering between noise texture levels is created. Therefore, the turbulence appears consistent at different resolutions, i.e. if an object is zoomed away from the viewer, the low frequency noise is transitioned to high frequency noise so the visual characteristics of the turbulence appear consistent. In order to maintain the frequency distribution, one feature is moved between noise texture levels at a time, linearly scaling the width between levels. With this method, popping artifacts occur only at the highest and lowest frequencies. At high frequencies, the artifacts were very tiny and so not very noticeable. Low frequency errors can be avoided by making the lowest frequency larger than the screen.

As a final consideration, the surface meshes in the scene may be composed of triangles that occupy screen space of less than f × f pixels, in which case no face will draw any features of the desired frequency! We create a hierarchy of faces, grouping faces into superfaces as an octree. We then analyze the screen space occupied by the superfaces in order to determine the number of features to draw at low frequencies. Only normal similarity is considered when creating our hierarchies, and though better schemes may exist for grouping faces, this method works well in practice.

6 Results and Discussion

We show some results obtained by our method using as input either a photograph or a 3d model (Figure 12). More watercolor results and some videos can be found at artis.imag.fr/Membres/Joelle.Thollot/Watercolor/.

As most of the work is done by the GPU, the static pipeline runs at a speed similar to Gouraud shading (between 25 frames per second for 25k triangles down to 3 frames per second for 160k triangles) for a 750 × 570 viewport on a Pentium IV 3GHz with a GeForce FX 6800.

Evaluating the visual quality of watercolor renderings is not an easy task. Figure 13 shows a comparison we have made with Curtis' original method [Curtis et al. 1997], which was a full physical simulation. Our method allows us to produce images that stay closer to the original photo while allowing the user to vary the obtained style. On the other hand, Curtis' result benefits from his simulation by showing interesting dispersion effects. However, it is precisely these dispersion effects that make that method unsuitable for animation. In light of this, it is unclear whether we lose anything by not doing a full fluid flow simulation in an animated or interactive environment.

The temporal coherence effects of our two methods yield distinct results with several tradeoffs. The texture attachment method yields good results for a single object viewed with a static background.
Figure 12: Watercolor-like results. The first line shows the original photograph and the resulting watercolor-like image. The second line shows other filtered photographs. The third line shows a Gouraud-shaded 3d scene and the resulting watercolor rendering, with two versions of the Venus model without and with morphological and normal smoothing. The fourth line shows more watercolor-like 3d models.

Figure 13: Comparison with Curtis' result: (a) original image, (b) Curtis' watercolor result, (c,d) our results.
The number of particles used to render the animation has to be a compromise between two effects: too many particles would tend to blur the high frequencies due to texture blending, whereas not enough particles create scrolling effects due to independent movements of each patch of texture. Some distortion occurs at pixels far from attachment points, but in practice, using at least 6 attachment points minimized this effect. Ideally the number of particles should be adapted to the viewpoint. For example, after a zoom the particles may become too far from each other and some new particles should be added.

Calculating flow textures interactively matched the movement of the turbulent flow and pigment dispersion exactly with the motion of the objects in the scene, making it useful for complex animations in immersive environments. Unfortunately, this method is much more expensive to compute since between 1-10 noise textures and 1-2 turbulence textures must be interactively constructed for every frame, yielding as few as 1 frame a second for 60k triangles. Finally, small scale popping artifacts may be visible, yet we found this was not very noticeable for complex turbulence textures, possibly due to the low alpha value assigned to such high frequency features. We allowed the user to control several variables (such as which frequencies to use) in order to construct the pigment and turbulence textures. This yielded a wide range of visual styles, yet those styles did not always exactly correspond with those produced by the static method. Reasonable fidelity to the static method was achievable as long as the frequencies we used corresponded well with the static pigment repartition texture.

7 Conclusions

We have presented a watercolor rendering technique that is fully controllable by the user, allowing the production of either coherent animations or images starting from a 3d model or a photograph. Our framework is interactive and intuitive, recreating the abstract quality and visual effects of real watercolors.

Each step of the pipeline can still be improved, especially by offering the user a choice between slower but better methods for each effect. This can be suitable for producing movies when high quality is most important. Ideally, our idea would be to keep our pipeline as a WYSIWYG interface for preparing an offline full computation of more precise effects.

As mentioned in the paper, there are several issues that we want to address in the future. The diffusion effect would be interesting to recreate. We can think of using work like [Chu and Tai 2005] to perform the diffusion computation, but it opens the question of how to control such an effect when dealing with an already given scene. To stay within our philosophy, we have to think of simple and intuitive ways of deciding which parts of the image must be concerned with the diffusion.

A lot of questions remain concerning temporal coherence. The methods we propose do not target the morphological smoothing step or any image processing. Taking inspiration from video tooning [Wang et al. 2004], we would like to address this problem in our pipeline. Another possibility would be to use image filters as in [Luft and Deussen 2005] to decrease the popping effects without adding too much computation.

Acknowledgments

This research was funded in part by the INRIA Action de Recherche Coopérative MIRO (www.labri.fr/perso/granier/MIRO/). Thanks to Laurence Boissieux for the 3d models of Figures 1-(a,b) and 4 (© Laurence Boissieux INRIA 2005). Thanks to Gilles Debunne for the reviewing and the web site.

References

BURGESS, J., WYVILL, G., AND KING, S. A. 2005. A system for real-time watercolour rendering. In CGI: Computer Graphics International, 234-240.

CHU, N. S.-H., AND TAI, C.-L. 2005. Moxi: real-time ink dispersion in absorbent paper. In Siggraph 05, ACM Press, 504-511.

COMANICIU, D., AND MEER, P. 1999. Mean shift analysis and applications. In ICCV (2), 1197-1203.

CUNZI, M., THOLLOT, J., PARIS, S., DEBUNNE, G., GASCUEL, J.-D., AND DURAND, F. 2003. Dynamic canvas for immersive non-photorealistic walkthroughs. In Graphics Interface, A K Peters, LTD., 121-129.

CURTIS, C. J., ANDERSON, S. E., SEIMS, J. E., FLEISCHER, K. W., AND SALESIN, D. H. 1997. Computer-generated watercolor. In Siggraph 97, ACM Press, 421-430.

EBERT, D. 1999. Simulating nature: From theory to application. In Siggraph Course. ACM Press.

JOHAN, H., HASHIMOTA, R., AND NISHITA, T. 2005. Creating watercolor style images taking into account painting techniques. Journal of Artsci, 207-215.

KAPLAN, M., AND COHEN, E. 2005. A generative model for dynamic canvas motion. In Computational Aesthetics, 49-56.

LAKE, A., MARSHALL, C., HARRIS, M., AND BLACKSTEIN, M. 2000. Stylized rendering techniques for scalable real-time 3d animation. In NPAR: International symposium on Non-photorealistic animation and rendering, ACM Press, 13-20.

LEI, E., AND CHANG, C.-F. 2004. Real-time rendering of watercolor effects for virtual environments. In PCM: Pacific Rim Conference on Multimedia, 474-481.

LUFT, T., AND DEUSSEN, O. 2005. Interactive watercolor animations. In PG: Pacific Conference on Computer Graphics and Applications, 7-9.

LUM, E. B., AND MA, K.-L. 2001. Non-photorealistic rendering using watercolor inspired textures and illumination. In PG: Pacific Conference on Computer Graphics and Applications, 322-331.

MEIER, B. J. 1996. Painterly rendering for animation. In Siggraph 96, ACM Press, 477-484.

PERLIN, K. 1985. An image synthesizer. In Siggraph 85, ACM Press, 287-296.

PERLIN, K. 2002. Improving noise. In Siggraph 02, ACM Press, 681-682.

SMALL, D. 1991. Modeling watercolor by simulating diffusion, pigment, and paper fibers. In SPIE, vol. 1460, 140-146.

TURK, G. 1990. Generating random points in triangles. In Graphics Gems I, A. Glassner, Ed. Academic Press.

VAN LAERHOVEN, T., LIESENBORGS, J., AND VAN REETH, F. 2004. Real-time watercolor painting on a distributed paper model. In CGI: Computer Graphics International, 640-643.

VAN ROOJEN, J. 2005. Watercolor Patterns. Pepin Press / Agile Rabbit.

WANG, J., XU, Y., SHUM, H.-Y., AND COHEN, M. F. 2004. Video tooning. In Siggraph 04, ACM Press, 574-583.