# SGNify
Sign language (SL) is used by around 70 million Deaf people globally, yet typical learning tools offer only video dictionaries of isolated signs. To improve learning and accessibility, particularly for AR/VR applications, researchers have developed a system called **SGNify**, which creates expressive 3D avatars from SL video footage while coping with challenges like occlusion, noise, and motion blur. By introducing linguistic priors that are universal to sign languages, SGNify resolves ambiguities and captures detailed hand poses, facial expressions, and body movements from monocular SL videos. The method outperforms existing 3D body-pose- and shape-estimation techniques on SL videos, and a perceptual study found SGNify’s 3D reconstructions to be significantly more understandable and natural than those of previous methods, matching the comprehension level of the source videos.
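To make the role of a linguistic prior concrete, here is a minimal toy sketch (not SGNify's actual code) of the general idea: per-frame hand-pose estimates from monocular video are noisy, and a prior that the handshape stays roughly constant within a sign can act as a regularizer during optimization. All names, weights, and dimensions below are hypothetical illustrations.

```python
import numpy as np

# Toy data: a sign whose true handshape is constant across 10 frames,
# observed through noisy per-frame pose fits (3 joint angles each).
rng = np.random.default_rng(0)
true_pose = np.full((10, 3), 0.5)
observed = true_pose + rng.normal(0.0, 0.3, true_pose.shape)

lam = 5.0   # hypothetical weight of the handshape-constancy prior
step = 0.05  # gradient-descent step size

pose = observed.copy()
for _ in range(200):
    data_grad = pose - observed             # stay close to the video evidence
    prior_grad = pose - pose.mean(axis=0)   # penalize frame-to-frame handshape drift
    pose -= step * (data_grad + lam * prior_grad)

err_raw = np.abs(observed - true_pose).mean()  # error of raw per-frame fits
err_reg = np.abs(pose - true_pose).mean()      # error after applying the prior
print(err_raw, err_reg)
```

In this toy setup the regularized estimate lands closer to the true constant handshape than the raw per-frame fits, which is the intuition behind using linguistic structure to resolve ambiguity in noisy or occluded frames; SGNify's actual formulation operates on full 3D body, hand, and face parameters.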