SIGGRAPH 2017 To Showcase 125+ Technical Papers
May 17, 2017

CHICAGO — SIGGRAPH 2017 has accepted more than 125 technical papers for presentation at this year's conference in Los Angeles, July 30 through August 3. From 439 submissions received from around the world, 127 juried technical papers were accepted for this year's showcase, an acceptance rate of roughly 29 percent. Forty papers from ACM Transactions on Graphics (TOG), the foremost peer-reviewed journal in computer graphics, will also be presented.
"Among the trends we noticed this year was that research in core topics, such as geometry processing or fluid simulation, continues while the field itself broadens and matures," SIGGRAPH 2017 technical papers program chair Marie-Paule Cani notes. "The 14 accepted papers on fabrication now tackle the creation of animated objects as well as static structures. Machine learning methods are being applied to perception and extended to many content synthesis applications. And topics such as sound processing and synthesis, along with computational cameras and displays, open exciting new directions."

Of the juried papers, the percentage breakdown based on topic area is as follows: 30 percent modeling, 25 percent animation and simulation, 25 percent imaging, 10 percent rendering, 4 percent perception, 3 percent sound, and 3 percent computational cameras and displays.

Highlights of the SIGGRAPH 2017 Technical Papers program include:

Inside Fluids: Clebsch Maps for Visualization and Processing
Authors: Albert Chern, California Institute of Technology; Felix Knöppel, Technische Universität Berlin; and Ulrich Pinkall, Technische Universität Berlin
Clebsch maps encode vector fields, such as those coming from fluid simulations, in the form of a function that encapsulates information about the field in an easily accessible manner. For example, vortex lines and tubes can be found by iso-contouring. This paper provides an algorithm for finding such maps.
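To illustrate the iso-contouring idea mentioned above (not the paper's actual Clebsch-map construction), the sketch below uses a hypothetical stand-in scalar field and a simple corner-straddle test to flag grid cells crossed by a level set; the function names and field are illustrative assumptions only.

```python
import numpy as np

def iso_crossing_cells(s, level):
    """Boolean mask of grid cells whose corner values straddle `level`.

    A cell is a crossing candidate if its four corners are not all on
    one side of the iso-level -- the discrete analogue of iso-contouring.
    """
    corners = np.stack([s[:-1, :-1], s[1:, :-1], s[:-1, 1:], s[1:, 1:]])
    return (corners.min(axis=0) <= level) & (corners.max(axis=0) >= level)

# Example: a radially symmetric stand-in field s = r^2, whose iso-contour
# at level 0.25 is a circle of radius 0.5 -- a toy "vortex line."
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
s = x**2 + y**2
mask = iso_crossing_cells(s, 0.25)  # True only in cells near radius 0.5
```

A real pipeline would trace the contour through the flagged cells (e.g. marching squares) rather than stop at the mask; this sketch only shows why a Clebsch-style scalar encoding makes vortex structures easy to locate.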

Multi-Species Simulation of Porous Sand and Water Mixtures
Authors: Andre Pradhana Tampubolon, University of California, Los Angeles; Theodore Gast, University of California, Los Angeles; Gergely Klar, DreamWorks Animation; Chuyuan Fu, University of California, Los Angeles; Joseph Teran, Walt Disney Animation Studios, Disney Research, University of California, Los Angeles; Chenfanfu Jiang, University of California, Los Angeles; and Ken Museth, DreamWorks Animation
This multi-species model for simulation of gravity-driven landslides and debris flows with porous sand and water interactions uses the material point method and mixture theory to describe individual phases coupled through a momentum exchange term.

Real-Time User-Guided Image Colorization with Learned Deep Priors
Authors: Richard Zhang, University of California, Berkeley; Jun-Yan Zhu, University of California, Berkeley; Phillip Isola, University of California, Berkeley; Xinyang Geng, University of California, Berkeley; Angela S. Lin, University of California, Berkeley; Yu Tianhe, University of California, Berkeley; and Alexei A. Efros, University of California, Berkeley
This paper proposes a deep learning approach for user-guided image colorization. The system directly maps a grayscale image, along with sparse, local user "hints," to an output colorization. The CNN propagates user edits by fusing low-level cues with high-level semantic information learned from large-scale data.

Dip Transform for 3D Shape Reconstruction
Authors: Kfir Aberman, Tel Aviv University, Advanced Innovation Center for Future Visual Entertainment; Oren Katzir, Tel Aviv University, Advanced Innovation Center for Future Visual Entertainment; Qiang Zhou, Shandong University; Zegang Luo, Shandong University; Andrei Sharf, Advanced Innovation Center for Future Visual Entertainment, Ben-Gurion University of the Negev; Chen Greif, The University of British Columbia; Baoquan Chen, Shandong University; and Daniel Cohen-Or, Tel Aviv University
This paper presents a 3D acquisition and reconstruction method based on Archimedes' submerged-volume equality, employing fluid displacement as the shape sensor. Because the liquid requires no line of sight, it penetrates cavities and hidden parts, as well as transparent and glossy materials, thus bypassing the visibility and optical limitations of scanning devices.
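As a toy illustration of the displacement-as-sensor idea (the paper itself dips the object along many orientations to recover full 3D shape; everything named here is a hypothetical stand-in), the sketch below dips a voxelized shape in stages and recovers its per-slice volume profile purely by differencing successive displacement readings.

```python
import numpy as np

def displaced_volumes(shape, depths):
    """Displacement reading after dipping `shape` (a 3D boolean voxel
    grid, z = axis 0) to each depth: the count of submerged solid voxels,
    i.e. the volume of liquid pushed out of the tank."""
    return np.array([shape[:d].sum() for d in depths])

def slice_volumes_from_dips(readings):
    """Differencing successive readings recovers per-slice solid volume."""
    return np.diff(readings, prepend=0)

# Example: a voxelized sphere of radius 8 inside a 20^3 grid.
z, y, x = np.mgrid[:20, :20, :20]
sphere = (z - 10)**2 + (y - 10)**2 + (x - 10)**2 <= 8**2
depths = np.arange(1, 21)          # dip one voxel layer deeper each step
readings = displaced_volumes(sphere, depths)
recovered = slice_volumes_from_dips(readings)
```

Here `recovered` matches the sphere's true slice-by-slice volume even though each reading is a single scalar, which is the sense in which displacement measurements carry shape information regardless of cavities or surface appearance.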

Dynamics-Aware Numerical Coarsening for Fabrication Design
Authors: Desai Chen, Massachusetts Institute of Technology; David Levin, University of Toronto; Wojciech Matusik, Massachusetts Institute of Technology; and Danny Kaufman, Adobe Research
This paper presents a simulation-driven optimization framework that, for the first time, automates the design of highly dynamic mechanisms. The key contributions are a method for identifying fabricated material properties for efficient predictive simulation, a dynamics-aware coarsening technique for finite-element analysis, and a material-aware impact response model.

Registration is now open for SIGGRAPH 2017 (