4D Movies Capture People In Clothing, Creating Realistic Virtual Try-On
August 11, 2017

STUTTGART, GERMANY — Researchers at the Max Planck Institute for Intelligent Systems (MPI-IS) have developed technology to digitally capture clothing on moving people, turn it into a 3D digital form, and dress virtual avatars with it. This new technology makes virtual clothing try-on practical. 
Traditional virtual clothing try-on involves obtaining the 2D clothing pattern from the manufacturer, sizing it to a body, and simulating how the garment drapes on that body. The new technique replaces garment simulation with garment capture: capturing existing garments and transferring them to new people greatly simplifies virtual try-on.


“Our approach is to scan a person wearing the garment, separate the clothing from the person, and then render it on top of a new person,” says Dr. Gerard Pons-Moll, research scientist at MPI-IS and principal investigator of the project. “This process captures all the detail present in real clothing, including how it moves, which is hard to replicate with simulation.”
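As a rough illustration of that scan-separate-render pipeline, the toy Python sketch below splits a labeled point cloud into garment and body points and moves the garment onto a new body. The function names, the labeled-point-cloud representation, and the synthetic data are all assumptions made for illustration; the actual system operates on dense 4D scan meshes.

    import numpy as np

    def separate_garment(points, labels):
        """Split a scanned point cloud into garment and body points."""
        return points[labels == 1], points[labels == 0]

    def transfer_garment(garment, src_body_center, dst_body_center):
        """Toy 'retargeting': move garment points so they sit on the new body."""
        return garment - src_body_center + dst_body_center

    # One synthetic 'frame': body points near the origin, garment points offset outward.
    rng = np.random.default_rng(0)
    body = rng.normal(0.0, 0.1, size=(100, 3))
    garment = body[:50] * 1.3          # garment floats slightly outside the body
    points = np.vstack([body, garment])
    labels = np.array([0] * 100 + [1] * 50)

    cloth, skin = separate_garment(points, labels)
    new_cloth = transfer_garment(cloth, skin.mean(axis=0), np.array([2.0, 0.0, 0.0]))
    print(new_cloth.mean(axis=0))      # garment is now centered on the new body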



ClothCap uses 4D movies of people recorded with a high-resolution 4D scanner (3dMD Ltd.). The scanner combines 66 cameras with projectors that illuminate the person being scanned.

“This scanner captures every wrinkle of clothing at high resolution. It is like having 66 eyes looking at a person from every possible angle,” says Michael Black, director at MPI-IS. “This allows us to study humans in motion like never before.”
 
Like any movie, a 4D scan can be replayed, but the actors and their clothing cannot be changed. ClothCap goes further: it computes the body shape and motion under the clothing while separating and tracking the garments on the body as it moves.
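One way to picture estimating the body under clothing is as an asymmetric fitting problem: the estimated body should stay inside the scanned cloth and is penalized heavily wherever it pokes outside. The 2D toy below, which fits a circle under a loosely "clothed" ring of scan points, is only an analogue built on that assumption; the actual method fits a 3D body model to the scans, and all names and weights here are illustrative.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    angles = rng.uniform(0, 2 * np.pi, 300)
    # 'Clothed scan': cloth drapes 0-30% outside a unit-radius body centered at (0.5, -0.2).
    slack = rng.uniform(0.0, 0.3, 300)
    scan = np.stack([(1.0 + slack) * np.cos(angles) + 0.5,
                     (1.0 + slack) * np.sin(angles) - 0.2], axis=1)

    def objective(params):
        cx, cy, r = params
        d = np.linalg.norm(scan - [cx, cy], axis=1) - r  # signed scan-to-body distance
        inside = np.maximum(d, 0.0)    # cloth hangs outside the body: mild penalty
        outside = np.maximum(-d, 0.0)  # body pokes outside the cloth: heavy penalty
        return (inside ** 2).sum() + 50.0 * (outside ** 2).sum()

    fit = minimize(objective, x0=[0.0, 0.0, 0.5])
    print(fit.x)  # center near (0.5, -0.2); radius pushed toward the true body radius of 1.0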

“The software turns the captured scans into separate meshes corresponding to the clothing and the body,” says Dr. Sergi Pujades, postdoctoral researcher at MPI-IS and one of the main authors of this work.
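Once a per-vertex garment/body labelling exists, producing those separate meshes is mechanical, as the minimal sketch below shows; computing that labelling on a moving scan is the hard part the system automates. The helper function and the tiny two-triangle mesh are assumptions for illustration.

    import numpy as np

    def extract_submesh(vertices, faces, labels, target):
        """Keep faces whose three vertices all carry `target`, reindexing vertices."""
        keep = np.all(labels[faces] == target, axis=1)
        sub_faces = faces[keep]
        used = np.unique(sub_faces)
        remap = np.full(len(vertices), -1, dtype=int)
        remap[used] = np.arange(len(used))
        return vertices[used], remap[sub_faces]

    # Tiny example: a strip of two triangles, with one vertex labelled 'body' (0).
    verts = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
    faces = np.array([[0, 1, 2], [1, 3, 2]])
    labels = np.array([0, 1, 1, 1])   # vertex 0 is body, the rest are shirt

    shirt_v, shirt_f = extract_submesh(verts, faces, labels, target=1)
    print(shirt_v.shape, shirt_f)     # (3, 3) and the single all-shirt triangle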

Traditional marker-based motion capture records only skeletal motion; placing hundreds of markers on clothing is impractical, and it is not well understood how to map captured clothing onto new characters. ClothCap makes this easy because the clothing is captured in correspondence with the body.

“The algorithm literally subtracts the clothing from the recorded subject and adds it to a new body to produce a realistic result,” says Gerard Pons-Moll. “It’s like doing arithmetic with people and their clothing. It’s cool!”
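A hedged sketch of that "arithmetic": express each garment vertex as an offset from its nearest body vertex (the subtraction), then re-apply those offsets on the corresponding vertices of a different body (the addition). The 2D circles below are a toy stand-in; the real method works on registered 3D meshes where the correspondence is already known.

    import numpy as np
    from scipy.spatial import cKDTree

    t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    circle = np.stack([np.cos(t), np.sin(t)], axis=1)

    src_body = 1.0 * circle            # captured subject's body
    garment = 1.15 * circle            # shirt floating 0.15 outside that body
    dst_body = 0.8 * circle            # new, slimmer avatar

    # Subtract: offset of each garment vertex from its nearest source-body vertex.
    nearest = cKDTree(src_body).query(garment)[1]
    offsets = garment - src_body[nearest]

    # Add: re-apply the offsets on the corresponding target-body vertices.
    new_garment = dst_body[nearest] + offsets
    print(np.linalg.norm(new_garment, axis=1).mean())  # ~0.95 = 0.8 + 0.15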
  
The work has several limitations. For example, cloth wrinkles do not change with body shape, and ClothCap does not allow the synthesis of novel motions. The team plans to address these limitations in future work.