The Skinned Multi-Person Linear model (SMPL) is an influential open-source statistical model of the human body that enables realistic digital human generation and analysis across diverse applications. Developed by researchers at the Max Planck Institute for Intelligent Systems, SMPL was released in 2015 alongside its publication at the SIGGRAPH Asia conference.
At its core, SMPL captures both the variation in body shape across people and the way that shape deforms with pose. It combines standard skinning with learned linear blend shapes that adapt as the pose changes; for example, the upper arm deforms realistically as the associated joint angles vary. Underlying SMPL are statistical models trained on thousands of 3D body scans, which encode the correlation between body shape and pose. This lets SMPL synthesize natural-looking human meshes that can walk, run, dance, or perform other activities without sacrificing realism, so the generated bodies can serve as detailed, high-quality virtual avatars for computer games and animation.
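The skinning step that SMPL builds on can be sketched in a few lines. Below is a minimal, illustrative NumPy implementation of linear blend skinning; the function name, toy data, and dimensions are invented for the example, and the real SMPL model additionally applies shape- and pose-dependent blend shapes to the template mesh before skinning:

```python
import numpy as np

def linear_blend_skinning(vertices, weights, joint_transforms):
    """Deform a mesh by blending per-joint rigid transforms at each vertex.

    vertices:         (V, 3) rest-pose vertex positions
    weights:          (V, J) skinning weights, each row summing to 1
    joint_transforms: (J, 4, 4) homogeneous world transform per joint
    """
    V = vertices.shape[0]
    homo = np.hstack([vertices, np.ones((V, 1))])               # (V, 4)
    # Per-vertex blended transform: sum_j w[v, j] * G[j]
    blended = np.einsum("vj,jab->vab", weights, joint_transforms)  # (V, 4, 4)
    posed = np.einsum("vab,vb->va", blended, homo)                 # (V, 4)
    return posed[:, :3]

# Toy example: two vertices influenced by two joints.
verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0]])
w = np.array([[1.0, 0.0],    # vertex 0 follows joint 0 only
              [0.5, 0.5]])   # vertex 1 is split between both joints
G0 = np.eye(4)               # joint 0 stays at rest
G1 = np.eye(4)
G1[0, 3] = 2.0               # joint 1 translates +2 along x
posed = linear_blend_skinning(verts, w, np.stack([G0, G1]))
# Vertex 0 is unmoved; vertex 1 moves halfway with joint 1, ending at x = 2.
```

Plain linear blend skinning is known to produce artifacts such as collapsing joints; SMPL's pose-dependent blend shapes are precisely the learned corrections that counteract them.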
Unlike many earlier body models, SMPL is designed to integrate seamlessly with modern content-production pipelines for computer graphics. For instance, it can be exported as an FBX file for use in leading animation software such as Autodesk Maya or the Unity game engine, using the plugins and example code provided on the project website. SMPL's runtime performance also makes it suitable for virtual-reality applications that must render complex human models at high frame rates. These qualities explain its enthusiastic uptake, since its open-source release, for realistic human portrayal beyond academic circles and into startups and industry.
Under the hood, SMPL jointly models identity-dependent shape parameters, which encode a person's intrinsic size and proportions, and pose-dependent corrective blend shapes, which add realistic surface detail such as muscle bulging as joints rotate into different configurations. Two standard versions of the model are available on the website, using 10 or 16 shape parameters respectively; these capture roughly 80 to 90 percent of human shape variability in a compact representation, while subtler, long-tail shape phenomena are covered by the full 300-dimensional PCA shape space available upon request. A related method, SMPLify, estimates 3D body shape and pose from a single photograph by automatically fitting the SMPL model to image cues, with potential applications in vision-based biometrics and human-computer interaction.
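The identity-dependent part of this formulation amounts to adding a linear combination of learned PCA shape directions to a template mesh. The following is a minimal sketch of that idea; the template and shape basis here are random stand-ins (the real learned arrays ship with the SMPL download), and only the variable names `template`, `shape_dirs`, and `shaped_vertices` are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
V = 6890        # vertex count of the SMPL mesh
n_betas = 10    # shape coefficients in the standard release

# Hypothetical stand-ins for the learned model data.
template = rng.standard_normal((V, 3))             # mean-shape mesh
shape_dirs = rng.standard_normal((V, 3, n_betas))  # PCA shape basis

def shaped_vertices(betas):
    """Rest-pose mesh for one identity: template plus a linear
    combination of shape directions weighted by betas."""
    return template + shape_dirs @ betas  # (V, 3, n) @ (n,) -> (V, 3)

mean_shape = shaped_vertices(np.zeros(n_betas))  # all-zero betas: mean body
tall_shape = shaped_vertices(0.5 * np.ones(n_betas))
```

Because the basis comes from PCA, truncating to the first 10 or 16 components keeps the directions of greatest shape variance, which is why a handful of coefficients already explains most of the variability across people.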
Given its impact, SMPL has fostered multiple derivatives in the years since its release. SMPL-X adds fully articulated hands and an expressive face, with jaw and eye articulation, enabling emotional facial expressions and speech animation. Similarly, MANO and SMPL+H focus on articulated hand motion, targeting the gestures and manipulation tasks critical for virtual reality, while other extensions address aspects such as soft-tissue dynamics for higher-fidelity body simulation. Resources for these enhanced formulations are linked from the SMPL website for convenience. Because they share SMPL's underlying framework, these models remain interoperable and extensible as new capabilities are added. Overall, SMPL demonstrates how a thoughtfully designed parametric model can spur an ecosystem of innovations that gains widespread adoption. Its evolution continues at MPI, which more recently introduced STAR, a more accurate and compact statistical body model built firmly on the foundations SMPL established over its years of maturing as a platform.