Anatomy Transfer


Ali Hamadi Dicko, INRIA / LJK-CNRS, Université de Grenoble
Tiantian Liu, University of Pennsylvania
Benjamin Gilles, LJK-CNRS, Université de Grenoble
Ladislav Kavan, University of Pennsylvania
François Faure, INRIA / LJK-CNRS, Université de Grenoble
Olivier Palombi, INRIA, Université de Grenoble
Marie-Paule Cani, INRIA / LJK-CNRS, Université de Grenoble


A reference anatomy (left) is automatically transferred to arbitrary humanoid characters. This is achieved by combining interpolated skin correspondences with anatomical rules.



Abstract

Characters with precise internal anatomy are important in film and visual effects, as well as in medical applications. We propose the first semi-automatic method for creating anatomical structures, such as bones, muscles, viscera, and fat tissues. This is done by transferring a reference anatomical model from an input template to an arbitrary target character, defined only by its boundary representation (skin). The fat distribution of the target character needs to be specified. We can either infer this information from MRI data, or allow users to express their creative intent through a new editing tool. The rest of our method runs automatically: it first transfers the bones to the target character while preserving their structure as much as possible. The bone layer, together with the target skin eroded according to the fat thickness information, defines a volume into which we map the internal anatomy of the source model using harmonic (Laplacian) deformation. This way, we can quickly generate anatomical models for a wide range of target characters while maintaining anatomical constraints.
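To give a flavor of the harmonic (Laplacian) deformation step mentioned above, the sketch below shows harmonic interpolation with Dirichlet constraints on a small 2D grid. It is a toy illustration of the general technique only, not the method used in the paper: the grid setup, the function name, and the choice of prescribed regions (standing in for the bone layer and the eroded skin) are all assumptions made for this example.

    # Minimal sketch of harmonic (Laplacian) interpolation on a regular 2D grid.
    # Hypothetical setup: values are prescribed (Dirichlet) on some nodes, and
    # interior values are solved so the discrete Laplacian vanishes. This is an
    # illustration of the general technique, not the authors' implementation.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def harmonic_interpolate(shape, constrained, values):
        """shape: (ny, nx) grid size.
        constrained: boolean array, True where the value is prescribed.
        values: prescribed values (ignored where constrained is False).
        Returns a field satisfying the discrete Laplace equation on free nodes."""
        ny, nx = shape
        n = ny * nx
        idx = np.arange(n).reshape(shape)
        rows, cols, data = [], [], []
        b = np.zeros(n)
        flat_c = constrained.ravel()
        flat_v = values.ravel()
        for i in range(ny):
            for j in range(nx):
                k = idx[i, j]
                if flat_c[k]:
                    # Dirichlet node: keep the prescribed value.
                    rows.append(k); cols.append(k); data.append(1.0)
                    b[k] = flat_v[k]
                else:
                    # Free node: 4-neighbour discrete Laplacian = 0.
                    nbrs = []
                    if i > 0:      nbrs.append(idx[i - 1, j])
                    if i < ny - 1: nbrs.append(idx[i + 1, j])
                    if j > 0:      nbrs.append(idx[i, j - 1])
                    if j < nx - 1: nbrs.append(idx[i, j + 1])
                    rows.append(k); cols.append(k); data.append(float(len(nbrs)))
                    for m in nbrs:
                        rows.append(k); cols.append(m); data.append(-1.0)
        A = sp.csr_matrix((data, (rows, cols)), shape=(n, n))
        return spla.spsolve(A, b).reshape(shape)

    # Toy usage: prescribe 0 on the left edge ("bone") and 1 on the right edge
    # ("eroded skin"); interior values are filled in harmonically.
    ny, nx = 32, 32
    constrained = np.zeros((ny, nx), dtype=bool)
    values = np.zeros((ny, nx))
    constrained[:, 0] = True;  values[:, 0] = 0.0
    constrained[:, -1] = True; values[:, -1] = 1.0
    field = harmonic_interpolate((ny, nx), constrained, values)

In the paper the analogous interpolation is carried out in 3D over the volume bounded by the transferred bones and the eroded skin, with per-component displacements rather than a scalar field; the sparse linear solve with Dirichlet boundary conditions is the common core.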






Publication

Ali Hamadi Dicko, Tiantian Liu, Benjamin Gilles, Ladislav Kavan, François Faure, Olivier Palombi, Marie-Paule Cani. Anatomy Transfer. ACM Transactions on Graphics 32(6) [Proceedings of SIGGRAPH Asia], 2013.


Links and Downloads

Paper

 
BibTeX



Acknowledgements

Many thanks to Laura Paiardini and Armelle Bauer for 3D modeling and kind support. We would also like to thank the anonymous reviewers for their detailed comments and feedback. This work was partly funded by the French ANR SoHusim, the ERC Expressive, and the CNRS Semyo projects.