41st SIGGRAPH 2014: Vancouver, Canada
Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH '14, Vancouver, Canada, August 10-14, 2014, Posters Proceedings. ACM 2014, ISBN 978-1-4503-2958-3
Animation systems and techniques
- Vipin Patel, G. B. C. S. Tejaswi Vinnakota, Soumyajit Deb, Manjunatha R. Rao: A 3D animation and effects framework for mobile devices. 1:1
- Tomokazu Ishikawa, Kento Okazaki, Masanori Kakimoto, Tomoyuki Nishita: A video summarization technique of animation products according to film comic format. 2:1
- Masaki Sato, Jun Kobayashi, Tomoaki Moriya, Yuki Morimoto, Tokiichiro Takahashi: An icicle generation model based on the SPH method. 3:1
- Sophie Jörg, Alison E. Leonard, Sabarish V. Babu, Kara Gundersen, Dhaval Parmar, Kevin Boggs, Shaundra Bryant Daily: Character animation and embodiment in teaching computational thinking. 4:1
- Rubaiat Habib Kazi, Fanny Chevalier, Tovi Grossman, Shengdong Zhao, George W. Fitzmaurice: DRACO: sketching animated drawings with kinetic textures. 5:1
- Changgu Kang, Leonard Yoon, Min Seok Do, Sung-Hee Lee: Environment-adaptive contact poses for virtual characters. 6:1
- Takuya Kato, Shunsuke Saito, Masahide Kawai, Tomoyori Iwao, Akinobu Maejima, Shigeo Morishima: Example-based blendshape sculpting with expression individuality. 7:1
- Or Avrahamy, Mark Shovman: From pain to happiness: interpolating meaningful gait patterns. 8:1
- Syuhei Sato, Yoshinori Dobashi, Kei Iwasaki, Hiroyuki Ochiai, Tsuyoshi Yamamoto, Tomoyuki Nishita: Generating various flow fields using principal component analysis. 9:1
- Naoya Iwamoto, Shigeo Morishima: Material parameter editing system for volumetric simulation models. 10:1
- Todd Keeler, Robert Bridson: Ocean waves animation using boundary integral equations and explicit mesh tracking. 11:1
- Chie Furusawa, Tsukasa Fukusato, Narumi Okada, Tatsunori Hirai, Shigeo Morishima: Quasi 3D rotation for hand-drawn characters. 12:1
- Felix Herbst, Alexander Schulze: Real-time approximation of convincing spider behaviour. 13:1
- Rina Tanaka, Hiroshi Mori, Fubito Toyama, Kenji Shoji: Real-time avatar motion synthesis by replacing low confidence joint poses. 14:1
- Kakuto Goto, Naoya Iwamoto, Shunsuke Saito, Shigeo Morishima: The efficient and robust sticky viscoelastic material simulation. 15:1
Art and design
- Gerry Chan, Anthony D. Whitehead, Avi Parush: An evaluation of personality type pairings to improve video game enjoyment. 16:1
- Fuka Nojiri, Yasuaki Kakehi: BelliesWave: color and shape changing pixels using bilayer rubber membranes. 17:1
- Man Zhang, Jun Mitani, Yoshihiro Kanamori, Yukio Fukui: Blocklizer: interactive design of stable mini block artwork. 18:1
- Momoko Okazaki, Ken Nakagaki, Yasuaki Kakehi: metamoCrochet: augmenting crocheting with bi-stable color changing inks. 19:1
- Takaki Kimura, Yasuaki Kakehi: MOSS-xels: slow changing pixels using the shape of racomitrium canescens. 20:1
- Stefan Petrovski, Panos Parthenios, Aineias Oikonomou, Katerina Mania: Music as an interventional design tool for urban designers. 21:1
- Laura K. Murphy, Philip Galanter: Stylized trees and landscapes. 22:1
- Michael Kuetemeyer, Anula Shetty: Time Lens. 23:1
- Akira Nakayasu: Waving tentacles: a system and method for controlling a SMA actuator. 24:1
Augmented and virtual realities
- Jinsil Hwaryoung Seo, James Storey, John Chavez, Diana Reyna, Jinkyo Suh, Michelle Pine: ARnatomy: tangible AR app for learning gross anatomy. 25:1
- Corrie Colombero, Andrew J. Hunsucker, Pui Mo, Monét Rouse: Augmented reality theater experience. 26:1
- Hikari Tono, Saki Sakaguchi, Mitsunori Matsushita: Basic study on creation of invisible shadows by using infrared lights and polarizers. 27:1
- Tobias Alexander Franke: Interactive relighting of arbitrary rough surfaces. 28:1
- Evangelia Mavromihelaki, Jessica Eccles, Neil A. Harrison, Hugo D. Critchley, Katerina Mania: Cyberball3D+ for fMRI: implementing neuroscientific gaming. 29:1
- Yuta Ueda, Karin Iwazaki, Mina Shibasaki, Yusuke Mizushina, Masahiro Furukawa, Hideaki Nii, Kouta Minamizawa, Susumu Tachi: HaptoMIRAGE: mid-air autostereoscopic display for seamless interaction with mixed reality environments. 30:1
- Ann McNamara, Laura Murphy, Conrad Egan: Investigating the use of eye-tracking for view management. 31:1
- Naoki Hashimoto, Akane Tashiro, Hisanori Saito, Satoshi Ogawa: Multifocal projection for dynamic multiple objects. 32:1
- Daisuke Kobayashi, Naoki Hashimoto: Spatial augmented reality by using depth-based object tracking. 33:1
Geometry and modeling
- Chin-chia Tung, Tsung-Hua Li, Hong Shiang Lin, Ming Ouhyoung: Cage-based deformation transfer using mass spring system. 34:1
- Masahiro Fujisaki, Daiki Kuwahara, Taro Nakamura, Akinobu Maejima, Takayoshi Yamashita, Shigeo Morishima: Facial fattening and slimming simulation considering skull structure. 35:1
- Chen Liu, Yong-Liang Yang, Ya-Hsuan Lee, Hung-Kuo Chu: Image-based paper pop-up design. 36:1
- Michal Smolik, Václav Skala: In-core and out-core memory fast parallel triangulation algorithm for large data sets in E2 and E3. 37:1
- Syed Altaf Ganihar, Shreyas Joshi, Shankar Shetty, Uma Mudenagudi: Metric tensor and Christoffel symbols based 3D object categorization. 38:1
- Ai Mizokawa, Taro Nakamura, Akinobu Maejima, Shigeo Morishima: Photorealistic facial image from monochrome pencil sketch. 39:1
- C. Antonio Sánchez, Sidney S. Fels: PolyMerge: a fast approach for hex-dominant mesh generation. 40:1
- Xuaner Zhang, Lam Yuk Wong: Virtual fitting: real-time garment simulation for online shopping. 41:1
Human-computer interactions
- Koharu Horishita, Syuhei Tsutsumi, Saki Sakaguchi, Mitsunori Matsushita: A nonluminous display using fur to represent different shades of color. 42:1
- Yuto Uehara, Shinji Mizuno: A virtual 3D photocopy system. 43:1
- Wataru Wakita, Hiromi T. Tanaka: An unconstrained tactile rendering with tablet device based on time-series haptic sensing with bilateral control. 44:1
- Tomohiro Amemiya, Hiroaki Gomi: Buru-Navi3: movement instruction using illusory pulled sensation created by thumb-sized vibrator. 45:1
- Chi-Chiang Huang, Rong-Hao Liang, Li-Wei Chan, Bing-Yu Chen: Dart-It: interacting with a remote display by throwing your finger touch. 46:1
- Edgar Flores, Sidney S. Fels: Design of a robotic face for studies on facial perception. 47:1
- Matt Adcock, Bruce H. Thomas, Chris Gunn, Ross T. Smith: Enabling physical telework with spatial augmented reality. 48:1
- M. H. D. Yamen Saraiji, Yusuke Mizushina, Charith Lasantha Fernando, Masahiro Furukawa, Youichi Kamiyama, Kouta Minamizawa, Susumu Tachi: Enforced telexistence. 49:1
- Mon-Chu Chen, Yi-Ching Huang, Kuan-Ying Wu: Gaze-based drawing assistant. 50:1
- Sota Suzuki, Haruto Suzuki, Mie Sato: Grasping a virtual object with a bare hand. 51:1
- Naoya Maeda, Maki Sugimoto: Pathfinder vision: tele-operation robot interface for supporting future prediction using stored past images. 52:1
- Michelle Holloway, Cindy Grimm, Ruth West, Ross T. Sowell: A guided approach to segmentation of volumetric data. 53:1
- Janelle Arita, Jinsil Hwaryoung Seo, Stephen Aldriedge: Soft tangible interaction design with tablets for young children. 54:1
- Kevin Fan, Yuta Sugiura, Kouta Minamizawa, Sohei Wakisaka, Masahiko Inami, Naotaka Fujii: Ubiquitous substitutional reality: re-experiencing the past in immersion. 55:1
- Leonardo Meli, Stefano Scheggi, Claudio Pacchierotti, Domenico Prattichizzo: Wearable haptics and hand tracking via an RGB-D camera for immersive tactile experiences. 56:1
Image and video processing
- Ishtiaq Rasool Khan: A new quantization scheme for HDR two-layer encoding schemes. 57:1
- Masahide Kawai, Tomoyori Iwao, Akinobu Maejima, Shigeo Morishima: Automatic deblurring for facial image based on patch synthesis. 58:1
- Yusuke Sekikawa, Sang-won Leigh, Koichiro Suzuki: Coded Lens: using coded aperture for low-cost and versatile imaging. 59:1
- Yong-Ho Lee, In-Kwon Lee: Color correction algorithm based on local similarity of stereo images. 60:1
- Yasin Nazzar, Jonathan Bouchard, James J. Clark: Detection of stereo window violation in 3D movies. 61:1
- Shunya Kawamura, Tsukasa Fukusato, Tatsunori Hirai, Shigeo Morishima: Efficient video viewing system for racquet sports with automatic summarization focusing on rally scenes. 62:1
- Hisataka Suzuki, Rex Hsieh, Akihiko Shirai: ExPixel: PixelShader for multiplex-image hiding in consumer 3D flat panels. 63:1
- Morgane Rivière, Makoto Okabe: Extraction of a cartoon's topology. 64:1
- Takahiro Fuji, Tsukasa Fukusato, Shoto Sasaki, Taro Masuda, Tatsunori Hirai, Shigeo Morishima: Face retrieval system by similarity of impression based on hair attribute. 65:1
- Yuji Aramaki, Yusuke Matsui, Toshihiko Yamasaki, Kiyoharu Aizawa: Interactive segmentation for manga. 66:1
- Judith E. Fan, Daniel Yamins, James J. DiCarlo, Nicholas B. Turk-Browne: Mapping core similarity among visual objects across image modalities. 67:1
- Jérémy Riviere, Pieter Peers, Abhijeet Ghosh: Mobile surface reflectometry. 68:1
- Ding Chen, Ryuuki Sakamoto: Optimizing infinite homography for bullet-time effect. 69:1
- Shunsuke Saito, Ryuuki Sakamoto, Shigeo Morishima: Patch-based fast image interpolation in spatial and temporal direction. 70:1
- Paul Olczak, Jack Tumblin: Photometric camera calibration: precise, labless, and automated with AutoLum. 71:1
- Kouta Takeuchi, Shinya Shimizu, Kensaku Fujii, Akira Kojima, Keita Takahashi, Toshiaki Fujii: Scene-independent super-resolution for plenoptic cameras. 72:1
- Hsin-Wei Wang, Ming-Wei Chang, Hong Shiang Lin, Ming Ouhyoung: Segmentation based stereo matching using color grouping. 73:1
- Pin-Hua Lu, Chien-Wen Chu, I-Chen Lin: Stereoscopic architectural image inpainting. 74:1
- Joan Sol Roo, Christian Richardt: Temporally coherent video de-anaglyph. 75:1
- Gregor Miller, Sidney S. Fels: VisionGL: towards an API for integrating vision and graphics. 76:1
- Krzysztof Zielinski, Yi-Ting Tsai, Ming Ouhyoung: Yet another vector representation for images using eikonal surfaces. 77:1
New hardware technologies
- Liana Manukyan, António Martins, Sophie A. Montandon, Michel Bessant, Michel C. Milinkovitch: A versatile high-resolution scanning system and its application to statistical analysis of lizards' skin colour time-evolution. 78:1
- Jaeyoung Kim, Byongsue Kang, Shinyoung Rhee, Byengwol Kim, Hyeonjin Yun, Junghwan Sung: Bitcube: the new kind of physical programming interface with embodied programming. 79:1
- Shaohui Jiao, Haitao Wang, Mingcai Zhou, Xun Sun, Tao Hong: Efficient sub-pixel based light field reconstruction on integral imaging display. 80:1
- Shunsuke Yoshida: Implementations toward interactive glasses-free tabletop 3D display. 81:1
- Reza Qarehbaghi, Hao Jiang, Bozena Kaminska: Nano-Media: multi-channel full color image with embedded covert information display. 82:1
- Yoichi Ochiai, Takayuki Hoshi, Jun Rekimoto: Pixie dust: graphics generated by levitated and animated objects in computational acoustic-potential field. 83:1
- Hisham Bedri, Micha Feigin, Michael Everett, Ivan Filho, Gregory L. Charvat, Ramesh Raskar: Seeing around corners with a mobile phone?: synthetic aperture audio imaging. 84:1
- Kazuhisa Yanaka: Simple projection-type integral photography system using single projector and fly's eye lens. 85:1
- F. Ferreira, Marcio Cabral, Olavo Belloc, Gregor Miller, Celso Setsuo Kurashima, Roseli de Deus Lopes, Ian Stavness, Júnia Coutinho Anacleto, Marcelo Knörich Zuffo, Sidney S. Fels: Spheree: a 3D perspective-corrected interactive spherical scalable display. 86:1
- Jefferson Amstutz, Scott Shaw, Lee Butler: Visually programming GPUs in VSL. 87:1
- Ungyeon Yang, Ki-Hong Kim: Wearable display for visualization of 3D objects at your fingertips. 88:1
Rendering
- Benjamin Bruneau, Matthias Segui Serera: 2D additive and dynamic shadows. 89:1
- Yasunari Ikeda, Issei Fujishiro, Toru Matsuoka: An object space approach to shadowing for hair-shaped objects. 90:1
- Jin-Woo Kim, Jung-Min Kim, Min-Woo Lee, Tack-Don Han: Asynchronous BVH reconstruction on CPU-GPU hybrid architecture. 91:1
- George Alex Koulieris, George Drettakis, Douglas W. Cunningham, Nikolaos Sidorakis, Katerina Mania: Context-aware material selective rendering for mobile graphics. 92:1
- Christoph Müller, Fabian Gärtner: Cross-compiled 3D web applications: problems and solutions. 93:1
- Yusuke Tokuyoshi, Tiago da Silva, Takashi Kanai: Directionality-aware rectilinear texture warped shadow maps. 94:1
- Youyou Wang, Ozgur Gonen, Ergun Akleman: Global illumination for 2D artworks with vector field rendering. 95:1
- Midori Okamoto, Shohei Adachi, Hiroaki Ukaji, Kazuki Okami, Shigeo Morishima: Measured curvature-dependent reflectance function for synthesizing translucent materials in real-time. 96:1
- Ryohei Tanaka, Yuki Morimoto, Hideki Todo, Tokiichiro Takahashi: Parametric stylized highlight for character animation based on 3D scene data. 97:1
- Xi M. Chen, Timothy Lambert, Eric Penner: Pre-integrated deferred subsurface scattering. 98:1
- Pu Wang, Diana Bicazan, Abhijeet Ghosh: Rerendering landscape photographs. 99:1
- Takashi Ejiri, Yuki Morimoto, Tokiichiro Takahashi: Shading approach for artistic stroke thickness using 2D light position. 100:1
- Adrián Jarabo, Julio Marco, Adolfo Muñoz, Raul Buisan, Wojciech Jarosz, Diego Gutierrez: Theory and analysis of transient rendering. 101:1
- Lukas Herrmanns, Tobias Alexander Franke: Screen space cone tracing for glossy reflections. 102:1
Visualization
- Andrew Kenneth Ho, Mark A. Nicosia, Angela Dietsch, William Pearson, Jana Rieger, Nancy Solomon, Maureen Stone, Yoko Inamoto, Eiichi Saitoh, Sheldon Green, Sidney S. Fels: 3D dynamic visualization of swallowing from multi-slice computed tomography. 103:1
- Takefumi Hayashi, Narihito Naoe, Naho Komatsubara, Kenji Sumiya, Kay Yonezawa: Development of cultural capital content using ultra-high resolution images. 104:1
- Abir Al Hajri, Matthew Fong, Gregor Miller, Sidney S. Fels: How personal video navigation history can be visualized. 105:1
- Daiki Matsumoto, Yusuke Matsui, Toshihiko Yamasaki, Kiyoharu Aizawa, Takanori Katagiri: IllustStyleMap: visualization of illustrations based on similarity of drawing style of authors. 106:1
- Chuong V. Nguyen, David R. Lovell, Rolf Oberprieler, Debbie Jennings, Matt Adcock, Eleanor Gates-Stuart, John La Salle: Natural-color 3D insect models for education, entertainment, biosecurity and science. 107:1
- Donald Madden, Andrew W. Scanlon, Yunxian Zhou, Tae Eun Choe, Martin Smith: Real time video overlays. 108:1
- Amol Mahurkar, Ameya Joshi, Naren Nallapareddy, Pradyumna Reddy, Micha Feigin, Achuta Kadambi, Ramesh Raskar: Selective visualization of anomalies in fundus images via sparse and low rank decomposition. 109:1
- Matt Viehdorfer, Sarah Nemanic, Serena Mills, Mike Bailey: Virtual dog head: using 3D models to teach complex veterinary anatomy. 110:1