SIGGRAPH Asia 2010 Sketches: Seoul, Republic of Korea
- Marie-Paule Cani, Alla Sheffer: ACM SIGGRAPH ASIA 2010 Sketches, Seoul, Republic of Korea, December 15-18, 2010. ACM 2010, ISBN 978-1-4503-0523-5
- Lingfeng Yang: Modeling player performance in rhythm games. 1:1-1:2
- Shin'ichi Kawamoto, Tatsuo Yotsukura, Satoshi Nakamura, Junya Yamamoto, Tsunenori Shirahama, Hakuei Yamamoto: Integrating lip-synch into game production workflow: "Sengoku BASARA 3" (Copyright restrictions prevent ACM from providing the full text for this article). 2:1
- Kenneth Chan, Koji Mikami, Kunio Kondo: Measuring interest in linear single player FPS games. 3:1-3:2
- Hwan-Soo Yoo, Seong-Whan Kim, Ok-Hue Cho: EDGE: an easy design tool of game event for rapid game development. 4:1-4:2
- Matthias Nieser, Jonathan Palacios, Konrad Polthier, Eugene Zhang: Hexagonal global parameterization of arbitrary surfaces. 5:1-5:2
- Natapon Pantuwong, Masanori Sugimoto: Skeleton-growing: a vector-field-based 3D curve-skeleton extraction algorithm. 6:1-6:2
- Kangying Cai, Weiwei Li, Weiliang Meng, Wencheng Wang, Zhibo Chen, Xin Zheng: Robust discovery of partial rigid symmetries on 3D models. 7:1-7:2
- René Weller, Gabriel Zachmann: ProtoSphere: a GPU-assisted prototype guided sphere packing algorithm for arbitrary objects. 8:1-8:2
- Jo Skjermo, Torbjørn Hallgren: Virtual heritage production as a tool in education. 9:1-9:2
- Junming Jimmy Peng, Wolfgang Müller-Wittig: Understanding Ohm's law: enlightenment through augmented reality. 10:1-10:2
- Sriranjan Rasakatla: Solar system and gravity using visual metaphors and simulations. 11:1-11:2
- Martin Breidt, Heinrich H. Bülthoff, Cristóbal Curio: Face models from noisy 3D cameras. 12:1-12:2
- Arthur Niswar, Ee Ping Ong, Zhiyong Huang: Pose-invariant 3D face reconstruction from a single image. 13:1-13:2
- Katsuhisa Kanazawa, Kazushi Urabe, Tomoaki Moriya, Tokiichiro Takahashi: An image query-based approach for urban modeling. 14:1-14:2
- Vinh Ninh Dao, Masanori Sugimoto: A correspondence matching technique of dense checkerboard pattern for one-shot geometry acquisition. 15:1-15:2
- Shinji Ogaki, Yusuke Tokuyoshi, Sebastian Schoellhammer: An empirical fur shader. 16:1-16:2
- Ramón Montoya-Vozmediano: Point-based hair global illumination. 17:1
- Yutaka Goda, Tsuyoshi Nakamura, Masayoshi Kanoh: Texture transfer based on continuous structure of texture patches for design of artistic Shodo fonts. 18:1-18:2
- Shinji Ogaki: Direct ray tracing of Phong Tessellation. 19:1-19:2
- Graham Fyffe: Single-shot photometric stereo by spectral multiplexing. 20:1-20:2
- Kaori Kikuchi, Bruce Lamond, Abhijeet Ghosh, Pieter Peers, Paul E. Debevec: Free-form polarized spherical illumination reflectometry. 21:1-21:2
- Abhishek Dutta, William A. P. Smith: Minimal image sets for robust spherical gradient photometric stereo. 22:1-22:2
- Shiming Zou, Hongxin Zhang, Xavier Granier: Shading-interval constraints for normal map editing. 23:1-23:2
- Di Cao, Rick Parent: Electrostatic dynamics interaction for cloth. 24:1-24:2
- Ryoichi Ando, Reiji Tsuruno: High-frequency aware PIC/FLIP in liquid animation. 25:1-25:2
- Tomoaki Moriya, Tokiichiro Takahashi: A real time computer model for wind-driven fallen snow. 26:1-26:2
- Witawat Rungjiratananon, Yoshihiro Kanamori, Tomoyuki Nishita: Elastic rod simulation by chain shape matching with twisting effect. 27:1-27:2
- William Wai-Lam Ng, Clifford S. T. Choy, Daniel Pak-Kong Lun, Lap-Pui Chau: Synchronized partial-body motion graphs. 28:1-28:2
- J. P. Lewis, Nebojsa Dragosavac: Stable and efficient differential inverse kinematics. 29:1-29:2
- Martin Prazák, Rachel McDonnell, Carol O'Sullivan: Perceptual evaluation of human animation timewarping. 30:1-30:2
- Takeshi Miura, Kazutaka Mitobe, Takaaki Kaiga, Takashi Yukawa, Katsubumi Tajima, Hideo Tamamoto: Derivation of dance similarity from balance characteristics. 31:1-31:2
- Wee Teck Fong, Cher Jingting, Farzam Farbiz, Zhiyong Huang: Sub-100 grams ungrounded haptics device for 14-g impact simulation. 32:1-32:2
- Alexis Andre: OtoMushi: touching sound. 33:1-33:2
- Amit Bleiweiss, Dagan Eshar, Gershom Kutliroff, Alon Lerner, Yinon Oshrat, Yaron Yanai: Enhanced interactive gaming by blending full-body tracking and gesture animation. 34:1-34:2
- Kazuki Kumagai, Tokiichiro Takahashi: PETICA: an interactive painting tool with 3D geometrical brushes. 35:1-35:2
- Denis Kravtsov, Oleg Fryazinov, Valery Adzhiev, Alexander A. Pasko, Peter Comninos: Real-time controlled metamorphosis of animated meshes using polygonal-functional hybrids. 36:1-36:2
- Hyunjun Lee, Minsu Ahn, Seungyong Lee: Displaced subdivision surfaces of animated meshes. 37:1-37:2
- Galina Pasko, Denis Kravtsov, Alexander A. Pasko: Space-time blending with improved user control in real-time. 38:1-38:2
- Xiaojuan Ning, Xiaopeng Zhang, Yinghui Wang: Automatic architecture model generation based on object hierarchy. 39:1-39:2
- Clemens Sielaff: EmoCoW: an interface for real-time facial animation. 40:1-40:2
- Rachel McDonnell, Martin Breidt: Face reality: investigating the Uncanny Valley for virtual faces. 41:1-41:2
- David Komorowski, Vinod Melapudi, Darren Mortillaro, Gene S. Lee: A hybrid approach to facial rigging. 42:1-42:2
- Gene S. Lee: Automated target selection for DrivenShape. 43:1-43:2
- Codruta O. Ancuti, Cosmin Ancuti, Chris Hermans, Philippe Bekaert: Fusion-based image and video decolorization (Copyright restrictions prevent ACM from providing the full text for this article). 44:1
- Codruta O. Ancuti, Cosmin Ancuti, Chris Hermans, Philippe Bekaert: Layer-based single image dehazing by per-pixel haze detection. 45:1-45:2
- Weiming Dong, Guanbo Bao, Xiaopeng Zhang, Jean-Claude Paul: Fast local color transfer via dominant colors mapping. 46:1-46:2
- Koichiro Honda, Takeo Igarashi: NinjaEdit: simultaneous and consistent editing of an unorganized set of photographs (Copyright restrictions prevent ACM from providing the full text for this article). 47:1
- Tae-Yong Kim, Oliver Palmer, Nathan Litke: Simulating bull drool in "Knight and Day". 48:1
- Aaron Lo, Jiayi Chong, David Ryu: Simulation-Aided Performance: behind the coils of Slinky Dog in "Toy Story 3" (Copyright restrictions prevent ACM from providing the full text for this article). 49:1
- Jae-Ho Nah, Yoon-Sig Kang, Kwang-Jo Lee, Shin-Jun Lee, Tack-Don Han, Sung-Bong Yang: MobiRT: an implementation of OpenGL ES-based CPU-GPU hybrid ray tracer for mobile devices. 50:1-50:2
- Andrei Sherstyuk, Sally Olle, Jim Sink: Blue Mars chronicles: building for millions. 51:1-51:2
- Jie Feng, Yang Liu, Bingfeng Zhou: Real-time stereo visual hull rendering using a multi-GPU-accelerated pipeline. 52:1-52:2
- Chris Wyman: Interactive voxelized epipolar shadow volumes. 53:1-53:2
- Toshiya Hachisuka, Henrik Wann Jensen: Parallel progressive photon mapping on GPUs. 54:1
- Jae-Ho Nah, Jeong-Soo Park, Jin-Woo Kim, Chanmin Park, Tack-Don Han: Ordered depth-first layouts for ray tracing. 55:1-55:2
