- Events (5)
- Blockchain (11)
- Education (21)
- Resources (9)
- Organizations (31)
- Metaverse (19)
- Military (3)
- Tools (7)
- Training (5)
- Medical (12)
- Petaverse (1)
- Mocap (17)
- Studio (25)
- Countries (384)
- Fashion (13)
- People (4)
TELYUKA is a Japanese husband-and-wife 3DCG artist duo, Akihiko and Yuka Ishikawa, known for creating the hyper-realistic digital human "Saya," a CG high school girl that gained wide attention for its realism. Since its debut in 2015, Saya has evolved from a static image into an interactive digital human capable of real-time conversation and emotional expression through integration with AI technologies such as NEC's generative AI and Aisin's multimodal dialogue systems. TELYUKA's work emphasizes artistic and humanistic values within digital human creation, distinguishing their work from purely commercial avatars. They have collaborated with companies like NEC and Aisin to explore applications in partner AI systems and customer service, and their projects have been featured at academic conferences and in 8K video displays. Their approach combines high-end visual fidelity with research into empathetic, lifelike digital communication, making Saya a pioneering example of digital human innovation in Japan.
Tencent offers a comprehensive suite of AI-powered digital human solutions across its cloud and AI platforms. These include highly realistic 2D and 3D avatars, cartoon-style figures, and NeRF-based models that can mimic real humans in facial features, expressions, and movements. Their services range from content creation tools (like Tencent Cloud’s Avatar Editor and ZenVideo) to full-featured virtual hosts, livestreaming assistants, and business-facing digital human APIs. Users can generate avatars from just a photo and audio, automate lip-syncing and gestures, and deploy digital humans in sectors such as entertainment, education, e-commerce, and virtual customer service. Tencent has also open-sourced its HunyuanVideo-Avatar model, allowing developers to create speaking and singing avatars from a single image and short audio clip. These tools are designed for low entry barriers (e.g., 3-minute training videos), support multilingual capabilities, and enable rapid deployment across web, mobile, and broadcast platforms. Through platforms like Yuanqi and Hunyuan, Tencent is actively pushing personalized, scalable AI-driven virtual agents into mainstream adoption.
Tencent Cloud’s AI Digital Human platform provides a comprehensive suite of cloud-based tools for creating and deploying hyper-realistic, interactive virtual humans across industries. These digital humans support real-time lip-syncing, facial expression replication, and voice cloning in multiple languages, and are used for customer service, broadcasting, education, virtual hosting, and tourism. Tencent offers multiple solutions including the Xiaowei Digital Humans and the Sonic avatar model co-developed with Zhejiang University, enabling fast avatar creation from minimal video samples. The platform supports video generation, avatar-based livestreaming, and integration with AI models like DeepSeek. Tencent Cloud promotes its digital human technology as a key driver of smart enterprise transformation, offering APIs, realistic 3D avatar libraries, and customization capabilities tailored for varied business scenarios.
Tencent Interactive Entertainment (Tencent IEG) is Tencent’s comprehensive interactive entertainment brand, positioned as a global leader in the sector. It encompasses multiple business platforms including Tencent Games, Tencent Literature, and Tencent Animation, delivering products and services across online games, literature, comics, drama, film, and television. The group follows a “pan-entertainment” strategy, leveraging intellectual property across media formats to create interconnected ecosystems. Through this approach, Tencent IEG aims to integrate creative content production, publishing, and distribution, combining gaming with other cultural and entertainment industries to provide users with a diverse range of interactive experiences.
Tencent IEG has been deeply involved in virtual human development, applying the technology across gaming, entertainment, cultural, and commercial contexts. Its work includes high-fidelity real-time digital humans such as “Siren,” developed by the NEXT Studios technology team, which showcases advanced rendering of facial features, skin, hair, and expressions to near-photorealistic quality. Tencent IEG has integrated virtual humans into projects like TMELAND, a virtual music world in QQ Music and WeSing, and created proprietary characters such as "Jilly", "Xingtong", and "Tong Heguang". These digital humans have been used for live performances, interactive experiences, museum guides, esports hosting, and media productions. The company’s XR division and various R&D teams have explored combining virtual humans with AI, motion capture, and machine learning to enable realistic expressions, lip-sync, and autonomous interaction.
Tencent’s GameAISDK is an open-source toolkit for developing game AI based on image recognition. It enables automated game testing by detecting UI elements, recognizing in-game objects, and applying AI algorithms such as DQN and imitation learning to control gameplay. The system supports game genres like endless runners, battle royale, shooters, and MOBAs, and is composed of modules including AIClient (which interacts with the mobile device), IO, MC, UI, GameReg, and AI logic. It can be deployed locally or in the cloud, with AIClient capturing real-time mobile game images and sending them to AI services for processing. The SDKTool helps generate game-specific configuration files for UI workflows and scene recognition, allowing the AI to interact with games like a human player. The platform supports model training with YOLO for object detection, UI auto-exploration to map game interfaces, and customization for new AI algorithms or recognition modules.
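The capture-recognize-act loop described above can be sketched in miniature. This is a hedged illustration only, not the actual GameAISDK API: the names `Detection`, `recognize`, `decide`, and `game_loop` are invented stand-ins for the roles the GameReg, AI, and AIClient modules play.

```python
# Illustrative sketch of an image-recognition game-AI loop.
# All names here are hypothetical; the real GameAISDK modules
# (AIClient, GameReg, the AI logic module) fill these roles.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Detection:
    label: str                       # e.g. "enemy", "button_start"
    box: Tuple[int, int, int, int]   # x, y, width, height in pixels
    score: float                     # detector confidence


def recognize(frame: bytes) -> List[Detection]:
    """Stand-in for the recognition module: in the real SDK this would
    run template matching or a YOLO detector over the captured frame."""
    return []  # this stub reports no detections


def decide(detections: List[Detection]) -> str:
    """Stand-in for the AI module: maps recognized objects to an action.
    The SDK plugs DQN or imitation-learning policies in at this point."""
    if any(d.label == "button_start" for d in detections):
        return "tap_start"
    return "noop"


def game_loop(capture: Callable[[], bytes],
              send_action: Callable[[str], None],
              max_steps: int = 3) -> List[str]:
    """AIClient-style loop: grab a frame, recognize it, choose and send
    an action, repeat for a fixed number of steps."""
    actions = []
    for _ in range(max_steps):
        frame = capture()
        action = decide(recognize(frame))
        send_action(action)
        actions.append(action)
    return actions
```

In the real toolkit the capture callback is backed by a device-screen grab and the action sink by touch-event injection on the phone; the sketch only shows how those pieces compose.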
Tencent Music Entertainment Group (TME) is China’s leading online music and audio entertainment company, publicly traded on the NYSE under the symbol TME and majority-owned by Tencent Holdings. Headquartered in Shenzhen, it operates four major platforms—QQ Music, Kugou Music, Kuwo Music, and WeSing—which together serve more than 800 million monthly active users. TME offers streaming music, online karaoke, live music, and other interactive audio-visual entertainment, integrating social features and copyright-protected content across a vast licensed catalog. The company also supports independent artists through the Tencent Musician platform, providing distribution, promotion, rights management, monetization, and training. In addition to consumer services, TME develops industry tools such as Lyra Lab’s AI-driven technologies, including music recognition, predictive analytics, and virtual media creation.
Tencent Music Entertainment’s Lyra Lab (lyracobar.y.qq.com) hosts an online platform providing open datasets and research tools for music recognition and analysis technologies. Its core offerings include LyraC-Net for cover song recognition, singer timbre identification, and humming-based music search, several of which have achieved state-of-the-art results at conferences such as Interspeech 2022 and IJCNN 2021. The site distributes multiple datasets collected from QQ Music’s licensed library and authorized user recordings: Lyra-SA (singing voice dataset with isolated vocals, MIDI, and lyrics), Lyra-CS (cover song dataset with original and cover/live fragments across languages and genres), and Lyra-QBH (query-by-humming dataset with recordings from male and female users covering 100 tracks). These resources are designed for academic and professional use, with applications in audio fingerprinting, music retrieval, and singing evaluation. The platform serves as a hub for researchers to download datasets after applying through its bilingual Chinese/English interface.
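Query-by-humming of the kind the Lyra-QBH dataset supports is commonly approached by matching a hummed pitch contour against reference melodies. The following is a minimal sketch of that idea using classic dynamic time warping (DTW); it is not Lyra Lab’s actual method, and the track names and contours are invented toy data.

```python
# Toy query-by-humming matcher: DTW over pitch contours.
# Contours are lists of MIDI note numbers; all data is illustrative.

def dtw_distance(a, b):
    """Classic DTW between two 1-D pitch contours: the minimum total
    absolute pitch difference over all monotonic alignments."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # skip a note in a
                                 d[i][j - 1],      # skip a note in b
                                 d[i - 1][j - 1])  # match the pair
    return d[n][m]


def best_match(query, library):
    """Return the library track whose contour is DTW-closest to the hum."""
    return min(library, key=lambda name: dtw_distance(query, library[name]))


# Invented reference library: ascending vs descending five-note phrases.
library = {
    "track_a": [60, 62, 64, 65, 67],
    "track_b": [67, 65, 64, 62, 60],
}
# A slightly imprecise hummed rendition of track_a (one note held twice).
hum = [60, 62, 64, 64, 65, 67]
```

Production systems layer tempo normalization, key transposition, and indexing on top of this core alignment step, but the DTW comparison above is the conceptual heart of humming-based retrieval.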
Tencent Music Entertainment’s Lyra Lab has become a major R&D hub for AI-driven virtual human technologies, working closely with Tencent’s Hunyuan Video Model team. On May 28, 2025, they jointly released and open-sourced HunyuanVideo-Avatar, a speech-driven digital human generation model that can produce high-fidelity talking or singing videos from just a single image and an audio file. Built on the HunyuanVideo large video model and Lyra Lab’s MuseV framework, it supports multiple camera framings (head-and-shoulders, half-body, full-body), various visual styles, emotional control, and multi-character dialogues. MuseV, along with related open-source tools MuseTalk (real-time lip-sync) and MusePose (pose-guided animation), enables unlimited-length, high-quality virtual human video generation. Lyra Lab has also developed music-driven virtual human systems such as Music XR Maker and the “QinLe” large model in collaboration with Tencent AI Lab, capable of generating singing performances, dance animations, and instrument gestures. Since 2022, the lab has debuted original virtual humans like “Xiao Qin” and “Xiao Tian,” applied its motion-driven photo animation technology in QQ Music projects, and integrated its solutions into TME’s entertainment platforms, making it a leader in music-powered digital human applications in China.
Tencent Research Institute, through its website TISI.org, has actively explored the development, trends, and applications of digital humans in the context of the virtual world and digital economy. It has published key reports such as the Digital Human Industry Development Trend Report (2023), which examines how technological progress, evolving user needs, ecosystem improvements, and regulatory standards drive the growth of the digital human sector. Topics include the transition from visually appealing avatars to intelligent agents with personality, the impact of generative AI on content creation, and legal implications such as portrait rights and voice synthesis. TISI also discusses integration of digital humans in livestreaming, entertainment, and enterprise services, and addresses compliance issues under China’s Civil Code. The institute serves as a thought leader in shaping the future of embodied AI and virtual identity within the broader digital ecosystem.
Sogou has been actively developing digital human technologies since 2018, emphasizing its strength in language and AI. The company launched its first digital human that year, powered by a proprietary system called the “Sogou Avatar” platform. This technology integrates hyper-realistic 3D modeling, machine translation, multimodal generation, transfer learning, and real-time facial animation. Sogou has produced AI-driven avatars for news broadcasting, including sign-language AI anchors, and collaborated with platforms like Sohu to create celebrity-based digital human hosts. These avatars are used in various applications such as live-streaming, automated content creation, customer service, and AI education. The company’s AI open platform (ai.sogou.com) supports digital human cloning, customized avatars, and virtual influencer creation.
Tencent is Sogou’s parent company. After first taking a minority stake and folding its own Soso search engine into Sogou in 2013, Tencent moved to acquire all outstanding Sogou shares in July 2020 and completed the go-private merger on September 23, 2021, delisting Sogou from the NYSE and making it a wholly owned subsidiary. Sogou’s assets and teams were then integrated into Tencent’s search and content ecosystem, its leadership changed (Wang Xiaochuan departed), and Sogou products such as search and the input method continued under Tencent’s control.
Tencent Yuanqi (yuanqi.tencent.com) is an AI digital human creation and distribution platform developed by Tencent. It allows users to build AI-powered virtual agents—referred to as “digital humans”—by uploading content such as WeChat messages, articles, and videos, and combining them with voice cloning and visual avatars. The platform supports text-based interaction, speech synthesis, and video creation, enabling individuals or businesses to create personal AI assistants, brand spokespeople, or customer service agents. Yuanqi is part of Tencent’s broader AI ecosystem and integrates with Tencent Cloud and its services.
Tencent’s ZenVideo (zenvideo.qq.com) is a cloud-based intelligent video creation platform that enables users to generate videos using AI-driven digital humans. It offers features such as 3D and 2D avatar broadcasting, text-to-speech, customizable virtual presenters, and real-time livestreaming with digital hosts. Users can input text to produce narrated videos without human presenters, saving costs and time. The platform supports voice cloning, expression control, and multilingual output, and is aimed at applications in media, marketing, education, and entertainment.
We create high-end visual content for brands, feature films, commercials, digital experiences, art installations, and more.