Title:
Landmarks for Dressing Avatars at Scale
Authors:
Ye FAN 1,2, Seung Heon SHEEN 1, Clarissa MARTINS 1, Dinesh K. PAI 1,2
1 Vital Mechanics Research Inc., Canada;
2 University of British Columbia, Canada
Keywords:
Clothing, Landmarks, Avatars, Fitting, Simulation
Abstract:
Dressing avatars in realistic clothing is a key requirement in many settings, including digital fashion, feature films, video games, and the metaverse. Achieving a good initial placement is extremely important, both to capture the intended use of the garment and for physics-based simulation. For example, the waistline of a pair of jeans can be placed at different heights depending on style and preference. Mainstream tools such as Clo3D and VStitcher let artists interactively place a garment on an avatar to achieve the desired look and fit for a single avatar. This requires time-consuming manual tweaking, especially for complex, multilayered garments. Moreover, the process does not scale to dressing multiple avatars, as the manual tweaking must be repeated for each avatar.
A key reason is that the semantics of how a garment is intended to be worn are not captured explicitly by existing tools. We propose the use of garment landmarks to capture this intent. Garment landmarks, like body landmarks, locate meaningful points, such as the top of the waistline. Garment landmarks only need to be placed once, during garment design, and paired with corresponding body landmarks. Body landmarks are widely used in anthropometry and may be built into an avatar or predicted using machine learning models. The garment's placement is then automatically adjusted to realize the placement intent by solving an optimization problem that minimizes the distance between the garment landmarks and the body landmarks, subject to other requirements described below. This simple yet powerful idea allows a garment to be easily placed on multiple avatars according to the desired intent, with minimal manual effort.
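As a rough illustration of the landmark-matching idea (our own minimal sketch, not the authors' solver): assuming a translation-only placement variable, a few hypothetical paired landmarks, and ignoring the layer-ordering and collision requirements mentioned above, the core objective can be written as a least-squares fit between garment and body landmarks.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical paired landmarks (e.g., top of waistline, left/right hip points), in metres.
    garment_landmarks = np.array([[ 0.00, 1.02, 0.10],
                                  [ 0.15, 1.00, 0.00],
                                  [-0.15, 1.00, 0.00]])
    body_landmarks    = np.array([[ 0.00, 0.95, 0.12],
                                  [ 0.16, 0.94, 0.01],
                                  [-0.16, 0.94, 0.01]])

    def placement_cost(t):
        # Sum of squared distances between translated garment landmarks
        # and their paired body landmarks (translation-only placement).
        return np.sum((garment_landmarks + t - body_landmarks) ** 2)

    result = minimize(placement_cost, x0=np.zeros(3))
    print("optimal translation:", result.x)
    print("residual landmark error:", result.fun)

In practice the placement variables, constraints, and landmark sets would be far richer than this sketch, but pairing garment landmarks with body landmarks is what makes a placement reusable across avatars.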
We will demonstrate how this new tool has been successfully implemented in our VitalFit virtual fit testing system and can achieve complex placements of multilayered garments. It handles multiple garment layers, fixes existing mesh tangling, and scales naturally once a good garment placement is available on one avatar.
Presentation:
VIDEO will be available here in Q3.2026.
VIDEO available in proceedings (purchase order)
How to Cite (MLA):
Y. Fan et al., "Landmarks for Dressing Avatars at Scale", Proceedings of 3DBODY.TECH 2025 - 16th International Conference and Expo on 3D/4D Body Scanning, Data and Processing Technologies, Lugano, Switzerland, 21-22 Oct. 2025, #31
Details:
Proceedings: 3DBODY.TECH 2025, 21-22 Oct. 2025, Lugano, Switzerland
Paper/Presentation: #31
DOI: -
License/Copyright notice
Proceedings: © Hometrica Consulting - Dr. Nicola D'Apuzzo, Switzerland, hometrica.ch.
Authors retain all rights to individual papers, which are licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
The papers appearing in the proceedings reflect the authors' opinions. Their inclusion in the proceedings does not necessarily constitute endorsement by the editor or by the publisher.