3D morphable models are low-dimensional parametrizations of 3D object classes which provide a powerful means of
associating 3D geometry with 2D images. However, morphable models are currently generated from 3D scans, so for general object
classes such as animals they are economically and practically infeasible to build. We show that, given a small amount of user interaction
(little more than that required to build a conventional morphable model), there is enough information in a collection of 2D pictures
of certain object classes to generate a full 3D morphable model, even in the absence of surface texture. The key restrictions are that
the object class should not be strongly articulated, and that a very rough rigid model must be provided as an initial estimate of
the 'mean shape'.
The model representation is a linear combination of subdivision surfaces, which we fit to image silhouettes and any identifiable
key points using a novel combined continuous-discrete optimization strategy. Results are demonstrated on several natural object
classes, and show that models of rather high quality can be obtained from this limited information.
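To make the model representation concrete: a morphable model expresses each shape as a mean plus a linear combination of deformation bases, and fitting it to 2D observations amounts to estimating the combination coefficients. The sketch below is a minimal illustration of this idea only, not the paper's method: it assumes orthographic projection, known 2D-3D correspondences, and synthetic data, whereas the paper fits subdivision surfaces to silhouettes with a combined continuous-discrete optimization. All names here (`mean_shape`, `basis`, `alpha`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny morphable model: a mean shape plus K deformation
# bases, each an (N, 3) array of per-vertex offsets (N vertices).
N, K = 50, 4
mean_shape = rng.normal(size=(N, 3))
basis = rng.normal(size=(K, N, 3))

def model_shape(alpha):
    """Linear combination: mean plus coefficient-weighted bases."""
    return mean_shape + np.tensordot(alpha, basis, axes=1)

# Simulate 2D key points by orthographic projection (drop z) of a shape
# generated with known ground-truth coefficients.
alpha_true = np.array([0.5, -1.0, 0.25, 0.0])
points_2d = model_shape(alpha_true)[:, :2]

# Recover the coefficients by linear least squares: the projection of
# each basis shape forms one column of the design matrix.
A = np.stack([b[:, :2].ravel() for b in basis], axis=1)   # (2N, K)
r = (points_2d - mean_shape[:, :2]).ravel()               # (2N,)
alpha_est, *_ = np.linalg.lstsq(A, r, rcond=None)
print(np.allclose(alpha_est, alpha_true))  # prints True
```

With silhouettes instead of known key points, the correspondences themselves are unknown, which is why the paper interleaves discrete correspondence search with continuous coefficient optimization.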
Full MATLAB source code is available from CodePlex.
The CodePlex release also contains our data sets for bananas, pigeons, polar bears and (of course) dolphins.
See the included documentation to reproduce our results.
@article{Cashman:2013:WSD,
  author  = {Thomas J. Cashman and Andrew W. Fitzgibbon},
  title   = {What shape are dolphins? Building {3D} morphable
             models from {2D} images},
  journal = {IEEE Transactions on Pattern Analysis and Machine
             Intelligence},
  volume  = {35},
  number  = {1},
  pages   = {232--244},
  year    = {2013}
}