1652914074

Posted on 2022/11/09 by mendicott

**From Winter Olympics coverage to Communist Youth League quiz interaction, the iFLYTEK AI virtual human interaction platform accelerates real-world deployment**

Remember the “3D Virtual Bingbing” that made a stunning appearance at the Beijing Winter Olympics?

Now this vivid, sweet and lovely virtual human is back, this time leading everyone through an online quiz on League history. Recently, to mark the 100th anniversary of the founding of the Communist Youth League of China, the Propaganda Department of the Communist Youth League Central Committee and the Youth League Committee of the Central Radio and Television Station, together with iFLYTEK, jointly launched a quiz on the history of the Communist Youth League to share stories of youth.

Built with iFLYTEK’s latest speech synthesis, AI-driven lip and expression animation, customized 3D avatar construction, and other artificial intelligence technologies, the virtual human not only has a three-dimensional avatar whose appearance, voice, tone, and body movements are comparable to a real person’s, but also supports 31 languages and dialects, including Northeastern Mandarin and English.
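To make the idea of AI-driven lip animation more concrete, the sketch below shows one common approach: mapping the timed phonemes produced by a speech synthesizer onto visemes (mouth shapes) and sampling them into a per-frame animation track for the 3D avatar. The phoneme symbols, viseme names, and function are hypothetical illustrations, not iFLYTEK’s actual pipeline.

```python
# Minimal phoneme-to-viseme lip-driving sketch (illustrative only; not iFLYTEK's pipeline).
# Assumes the TTS stage returns phonemes with start/end times; the viseme table is hypothetical.

from dataclasses import dataclass

@dataclass
class Phoneme:
    symbol: str   # e.g. "AA", "M", "S"
    start: float  # seconds
    end: float    # seconds

# Hypothetical many-to-one mapping from phonemes to mouth shapes (visemes).
PHONEME_TO_VISEME = {
    "AA": "open", "AE": "open", "AH": "open",
    "M": "closed", "B": "closed", "P": "closed",
    "F": "teeth_on_lip", "V": "teeth_on_lip",
    "S": "narrow", "Z": "narrow",
}

def phonemes_to_viseme_track(phonemes: list[Phoneme], fps: int = 30) -> list[str]:
    """Convert timed phonemes into a per-frame viseme track for the avatar's mouth."""
    if not phonemes:
        return []
    total_frames = int(phonemes[-1].end * fps) + 1
    track = ["rest"] * total_frames
    for ph in phonemes:
        viseme = PHONEME_TO_VISEME.get(ph.symbol, "rest")
        for frame in range(int(ph.start * fps), int(ph.end * fps) + 1):
            track[frame] = viseme
    return track

# Example: the synthesized word "map" drives three mouth shapes over about 0.4 seconds.
track = phonemes_to_viseme_track([Phoneme("M", 0.00, 0.10),
                                  Phoneme("AE", 0.10, 0.30),
                                  Phoneme("P", 0.30, 0.40)])
print(track[:5])
```

In a production system the per-frame visemes would typically be smoothed and blended into facial blendshape weights rather than switched discretely, but the core idea of text-to-speech timing driving the mouth is the same.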

From popularizing knowledge about the Beijing Winter Olympics to interactive quizzes on League history, the sweet voice and highly anthropomorphic features of 3D Virtual Bingbing have won her countless fans.

In addition to the lovely “3D Virtual Bingbing”, other members of iFLYTEK’s virtual human family, such as the virtual anchor Xiaoqing and the virtual human Aijia, are entering thousands of households. The interaction platform is the core foundation for unlocking more application scenarios and empowering the virtual human ecosystem.

AI-driven virtual humans have become the mainstream trend in the industry

According to the “2022 White Paper on the Development of China’s Virtual Human Industry”, today’s virtual humans fall into two categories. The first is the conventional virtual human, whose appearance is created through CG modeling and similar methods and which is then driven by a human performer (the “person inside”) using motion capture and face capture technology. The second uses AI technology to create, drive, and generate content for the virtual human in a “one-stop” way, giving it automatic interaction capabilities such as perception and expression without human intervention. In the future, AI-driven virtual humans are expected to become the industry mainstream and to drive the development of the virtual human ecosystem.

Conventional avatar modeling and driving generally suffer from high labor costs, high technical barriers, and long production cycles. They can serve some virtual human applications, but a single technology and traditional motion capture effects will struggle to meet the diverse application scenarios of the future.

He Shan, head of avatar technology research at the iFLYTEK AI Research Institute, said that AI-driven expression and motion synthesis technology has broader industry application prospects, and that virtual human technology is developing toward greater expressiveness, finer granularity, and more personalization.

The “3D Virtual Bingbing” created by iFLYTEK is not only close to a real person in voice, appearance, and movement. To inject more emotion into the virtual human, the team also drew on cognitive psychology to design her interactive emotions, used an unsupervised semantic model trained on massive amounts of text to classify and predict the emotion of different texts, and built a humanized “end-to-end” emotion synthesis system so that emotion runs through the 3D virtual human’s voice, facial expressions, and body movements during interaction, further enhancing the interaction experience.
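To picture how emotion can run through the whole pipeline, here is a minimal sketch under stated assumptions: a text-emotion classifier (a trivial keyword stand-in here, not the unsupervised model the article describes) selects prosody settings and facial blendshape weights that a speech synthesizer and 3D avatar renderer would consume. All names and parameters are hypothetical.

```python
# Toy emotion-aware rendering pipeline (illustrative; the classifier is a keyword
# stand-in for the large unsupervised text model described in the article).

EMOTION_KEYWORDS = {
    "happy": ["congratulations", "celebrate", "great"],
    "sad": ["sorry", "regret", "loss"],
}

# Hypothetical per-emotion rendering parameters: TTS prosody plus facial blendshape weights.
EMOTION_STYLES = {
    "happy":   {"tts_pitch_shift": +2.0, "tts_rate": 1.05, "blendshapes": {"smile": 0.8, "brow_raise": 0.4}},
    "sad":     {"tts_pitch_shift": -2.0, "tts_rate": 0.90, "blendshapes": {"frown": 0.6, "brow_lower": 0.3}},
    "neutral": {"tts_pitch_shift":  0.0, "tts_rate": 1.00, "blendshapes": {}},
}

def classify_emotion(text: str) -> str:
    """Predict an emotion label for the input text (keyword stand-in for a real model)."""
    lowered = text.lower()
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if any(word in lowered for word in keywords):
            return emotion
    return "neutral"

def render_reply(text: str) -> dict:
    """Attach emotion-dependent voice and facial parameters to a reply before synthesis."""
    emotion = classify_emotion(text)
    style = EMOTION_STYLES[emotion]
    return {"text": text, "emotion": emotion, **style}

print(render_reply("Congratulations on finishing the League history quiz!"))
```

The point of the sketch is the data flow: a single predicted emotion label conditions voice, face, and (by extension) body-movement parameters at the same time, which is what keeps the modalities consistent.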

As the virtual human industry enters a growth stage led by AI technology, He Shan believes that avatar construction and driving methods will become increasingly intelligent, and that virtual humans will express emotion and body movement in ever richer dimensions.

Making virtual human applications faster, more flexible, and more convenient

If the metaverse is likened to a spaceship heading into the future, then a virtual human is the ticket to get on the ship.

To meet the large-scale scenario requirements of the future metaverse, the iFLYTEK AI virtual human interaction platform uses a range of AI technologies to rapidly construct avatars and mass-produce content, effectively improving production efficiency, reducing labor costs, and making virtual human applications faster, more flexible, and more convenient.

Integrating core artificial intelligence technologies such as speech recognition, semantic understanding, speech synthesis, and avatar driving, iFLYTEK has spent the past two years pushing deep into the virtual human field to build a multi-dimensional AI ecosystem.

In October 2021, iFLYTEK released its virtual human interaction platform 1.0 strategy, with four key features: multimodal perception, pervasive emotion, multidimensional expression, and independent customization. It introduced technologies for virtual human generation, driving, and multimodal interaction, and built the iFLYTEK AI virtual human interaction platform, which not only enables rapid construction of virtual humans but also provides AI driving, API access, personalized avatar customization, and multi-scenario solutions to deliver a one-stop avatar creation service.
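As a rough picture of what “rapid construction of virtual humans plus API access” could look like from a developer’s side, the sketch below walks through a generic text-to-avatar-video workflow: create a customized avatar, submit a script, and poll for the rendered result. Every endpoint, field, and identifier is invented for illustration and is not iFLYTEK’s actual API.

```python
# Hypothetical REST workflow for a "one-stop" avatar creation service of the kind described above.
# The base URL, endpoints, and fields are placeholders, NOT iFLYTEK's real API.

import time
import requests

BASE = "https://api.example.com/virtual-human"        # placeholder base URL
HEADERS = {"Authorization": "Bearer <YOUR_API_KEY>"}  # placeholder credential

# 1. Create a customized avatar from a stock template plus a chosen voice and outfit.
avatar = requests.post(f"{BASE}/avatars", headers=HEADERS, json={
    "template_id": "female_presenter_01",
    "voice_id": "mandarin_sweet_01",
    "outfit": "winter_uniform",
}).json()

# 2. Submit a script; the platform drives speech, lip sync, expressions, and gestures from text.
job = requests.post(f"{BASE}/renders", headers=HEADERS, json={
    "avatar_id": avatar["id"],
    "script": "Welcome to the League history quiz!",
    "language": "zh-CN",
}).json()

# 3. Poll until the rendered broadcast video is ready.
while True:
    status = requests.get(f"{BASE}/renders/{job['id']}", headers=HEADERS).json()
    if status["state"] in ("done", "failed"):
        break
    time.sleep(5)

print(status.get("video_url"))
```

The design point such platforms emphasize is that the developer only supplies text and configuration; avatar driving, lip sync, and rendering happen server-side.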

At the same time, iFLYTEK actively cooperates with industry partners to build a virtual human ecosystem platform. To date, it has provided virtual human solutions for more than 400 enterprise customers in media, finance, cultural tourism, government affairs, e-commerce, and other industry scenarios.

Letting virtual humans become our partners

Since 2018, when the iFLYTEK AI virtual anchor “Kang Xiaohui” appeared on CCTV and the world’s first multilingual virtual anchor Xiaoqing was released, iFLYTEK has built a series of vertical products: a one-stop virtual anchor video production and editing service for audio and video content creation, AI virtual human interaction systems and terminals for human-computer interaction services across industries, and an AI virtual human live-streaming system for e-commerce live broadcast scenarios.

In April this year, the Chinese artificial intelligence media outlet Xinzhiyuan announced several lists for its 2021 AI Era & Metaverse Innovation Awards. iFLYTEK was named to the AI Innovation Enterprise TOP30 and the Metaverse Pioneer TOP30, and the iFLYTEK virtual human Aijia was honored in the 2021 Metaverse New Human TOP10.

Currently, iFLYTEK’s AI virtual human interaction platform has formed four mature product lines committed to making virtual human development and deployment more rapid, flexible, and convenient. They are being applied continuously to scenarios such as marketing and management to improve efficiency, reduce costs, improve the service experience, and help enterprise services achieve digital and intelligent upgrades.

In 2022, iFLYTEK announced the launch of its “Ultra Brain 2030 Plan”, under which it will continue to build an AI virtual human family for the digital economy, the virtual world, and the metaverse, providing virtual humans that can genuinely help people and that combine customizable professional knowledge, continuous evolution, warmth, and other characteristics.

iFLYTEK will continue to increase its investment in technology. On the one hand, it will raise the technical level of virtual humans in perception, expression, and emotion so that human-computer interaction becomes more realistic and vivid; on the other hand, the AI virtual human interaction platform supports independent customization of voice, appearance, clothing, scenes, and more, and works with ecosystem partners to build a richer digital asset library so that virtual humans can be personalized.

In the future, the hope is that every enterprise and every individual can easily obtain their own virtual human, helping people work and live better and making virtual humans our partners.

http://china.qianlong.com/2022/0517/7199877.shtml
