**Kazi Injamamul Haque’s doctoral research**, showcased at SIGGRAPH Asia 2023, focuses on data-driven 3D facial animation synthesis for digital humans, where accurate and realistic facial animation is essential for user immersion. Because faces are primary interaction points in both the real and the virtual world, even a slight deviation in facial movement can break the experience. Traditional animation methods, though realistic, are labor-intensive and cannot keep up with the surging demand for 3D content, especially with the rise of the metaverse. Haque’s research therefore investigates whether deep learning techniques can match the realism and quality of performance capture.

The study distinguishes emotional expressions from speech and compares the efficacy of deterministic and non-deterministic models, using vision-based reconstruction to curate emotion-labeled synthetic datasets. A highlight of this research is ‘FaceXHuBERT’, a proposed encoder-decoder system that translates audio input into emotionally rich 3D facial animation, with strong potential for real-time animation in gaming, cinema, and the rapidly evolving metaverse.
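To give a concrete sense of what such an audio-driven encoder-decoder can look like, the sketch below wires a pretrained HuBERT speech encoder to a small recurrent decoder that regresses per-frame 3D vertex offsets, conditioned on a binary emotion label. This is a minimal PyTorch illustration under assumed names and dimensions (the class name, vertex count, hidden sizes, and conditioning scheme are all hypothetical), not the published FaceXHuBERT implementation.

```python
# Minimal sketch of an audio-to-facial-animation encoder-decoder (illustrative,
# not the published FaceXHuBERT code). A pretrained HuBERT encoder turns raw
# 16 kHz audio into frame-level features; a GRU decoder, conditioned on a
# binary emotion label, regresses per-frame 3D vertex offsets from a neutral face.

import torch
import torch.nn as nn
from transformers import HubertModel


class AudioToFaceModel(nn.Module):
    def __init__(self, num_vertices=5023, hidden_size=256, emotion_dim=16):
        super().__init__()
        # Pretrained speech encoder: raw waveform -> sequence of audio features
        self.audio_encoder = HubertModel.from_pretrained("facebook/hubert-base-ls960")
        # Keep the convolutional front-end fixed (a common choice when fine-tuning)
        for p in self.audio_encoder.feature_extractor.parameters():
            p.requires_grad = False

        # Learned embedding for the emotion condition (0 = neutral, 1 = emotional)
        self.emotion_embedding = nn.Embedding(2, emotion_dim)

        # Temporal decoder over the concatenated audio + emotion features
        self.decoder = nn.GRU(
            input_size=self.audio_encoder.config.hidden_size + emotion_dim,
            hidden_size=hidden_size,
            batch_first=True,
        )
        # Map each decoder state to per-vertex 3D displacements
        self.vertex_head = nn.Linear(hidden_size, num_vertices * 3)

    def forward(self, waveform, emotion_label):
        # waveform: (batch, samples) raw audio at 16 kHz
        # emotion_label: (batch,) integer 0/1
        audio_features = self.audio_encoder(waveform).last_hidden_state  # (B, T, C)
        emotion = self.emotion_embedding(emotion_label)                  # (B, E)
        emotion = emotion.unsqueeze(1).expand(-1, audio_features.size(1), -1)
        decoder_in = torch.cat([audio_features, emotion], dim=-1)        # (B, T, C+E)
        states, _ = self.decoder(decoder_in)                             # (B, T, H)
        offsets = self.vertex_head(states)                               # (B, T, V*3)
        return offsets.view(offsets.size(0), offsets.size(1), -1, 3)


# Usage: one second of (silent) audio with the "emotional" condition
model = AudioToFaceModel()
wave = torch.zeros(1, 16000)
with torch.no_grad():
    animation = model(wave, torch.tensor([1]))
print(animation.shape)  # e.g. torch.Size([1, 49, 5023, 3]): frames x vertices x xyz
```

In this kind of design the emotion label steers the decoder toward expressive or neutral motion while the speech features drive lip synchronization; the per-frame vertex offsets can be added to a neutral template mesh and played back in real time.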