#greejp #siggraphorg
Alexander Barato, a student at the Tokyo University of Technology, presented a new method for user-generated content (UGC) creation using head-mounted displays at a virtual exhibition in 2023. The method, called an AI-assisted avatar fashion show, helps users create avatars and choreographies, using catwalk motion as the base animation.
Key components of the system include:
* **Meta Ink**: A clothing texture generation system that uses generative AI to turn user input into clothing textures.
* **Muscle Compressor**: A system that captures and compresses motion using a normalized humanoid posture representation, storing humanoid motion at a smaller file size.
* **Runway Broadcaster**: A system that stages a catwalk, automatically loads and combines avatar motions, and evaluates them.
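The presentation does not detail the Muscle Compressor's encoding, but the idea of storing motion compactly through a normalized posture representation can be sketched as quantizing normalized joint rotations to 16-bit integers. The joint layout, clip shape, and quantization scheme below are illustrative assumptions, not details from the talk.

```python
import numpy as np

# Hypothetical sketch: compress a motion clip by quantizing normalized
# joint rotations (Euler angles in radians, range [-pi, pi]) to uint16.
# The clip layout and quantization step are assumptions for illustration.

def compress_motion(angles: np.ndarray) -> np.ndarray:
    """Quantize float32 joint angles in [-pi, pi] to uint16."""
    normalized = (angles + np.pi) / (2 * np.pi)      # map [-pi, pi] -> [0, 1]
    return np.round(normalized * 65535).astype(np.uint16)

def decompress_motion(packed: np.ndarray) -> np.ndarray:
    """Recover approximate float32 angles from the uint16 representation."""
    return (packed.astype(np.float32) / 65535) * 2 * np.pi - np.pi

# 120 frames x 20 joints x 3 rotation axes
clip = np.random.uniform(-np.pi, np.pi, (120, 20, 3)).astype(np.float32)
packed = compress_motion(clip)
restored = decompress_motion(packed)

print(packed.nbytes / clip.nbytes)           # 0.5: half the storage
print(np.abs(restored - clip).max() < 1e-4)  # True: error under ~0.0001 rad
```

Halving storage relative to raw float32 while keeping the reconstruction error below a tenth of a millidegree illustrates why a normalized posture representation makes runtime loading of many motions practical.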
The system integrates VR elements such as 3D clothing texture generation and walking animation derived from user motion. It employs a unique approach based on the Discrete Fourier Transform to evaluate and match body-motion frequencies.
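The talk does not specify how the frequency matching is implemented, but a minimal sketch of the idea is to take the DFT of two motion signals and compare their dominant frequencies; the signals, frame rate, and matching threshold below are assumptions.

```python
import numpy as np

def dominant_frequency(signal: np.ndarray, fps: float) -> float:
    """Return the strongest non-DC frequency (Hz) of a 1-D motion signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[1 + np.argmax(spectrum[1:])]  # skip the DC component

fps = 30.0
t = np.arange(300) / fps                        # 10 seconds of motion
walk = np.sin(2 * np.pi * 1.0 * t)              # base catwalk cycle at 1 Hz
candidate = np.sin(2 * np.pi * 1.0 * t + 0.3)   # user motion, same tempo

f1 = dominant_frequency(walk, fps)
f2 = dominant_frequency(candidate, fps)
print(f1, f2)              # 1.0 1.0
print(abs(f1 - f2) < 0.1)  # True: tempos match, motions can be combined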
According to the results, the Muscle Compressor stores motion at a smaller size than comparable systems, enabling dynamic motion generation at runtime. The evaluation function in the Runway Broadcaster filtered out broken postures, and accepting only scores within a set range provided quality control.
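The score-based quality control described above can be sketched as a simple band filter over evaluation scores; the score values and thresholds here are illustrative assumptions, not figures from the presentation.

```python
# Hypothetical sketch: keep only candidate motions whose evaluation score
# falls inside an accepted range, discarding broken postures. The names,
# scores, and cutoffs are invented for illustration.

def filter_postures(scored, low=0.4, high=0.95):
    """Return names whose score lies within [low, high]."""
    return [name for name, score in scored if low <= score <= high]

candidates = [("walk_a", 0.82), ("broken_tpose", 0.05), ("walk_b", 0.61)]
print(filter_postures(candidates))  # ['walk_a', 'walk_b']
```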
The presentation concluded that the method is a significant advance in using generative AI to create UGC: it respects the creator's intentions and achieves results that would have been difficult for humans alone. It received positive feedback at the public exhibition, with 99.2% of users able to generate content without issue.