
Posted on 2023/07/24 by mendicott

**FBEC Conference | Wu Lianpeng, General Manager of Hisense Juhaokan AR/VR Division: Application Trends of Virtual Space and Digital Human Interaction**

The FBEC Future Business Ecosystem Link Conference was held on February 24, 2023, at the Sheraton Hotel in Futian, Shenzhen. The conference was held under the guidance of the Guangdong Game Industry Association and the Shenzhen Internet Cultural Market Association, and hosted by Gyro Technology.

The theme of the conference was "Moving Forward with Courage, Chasing the Light". Taking the perspective of forward-looking "explorers" of the industry, with the journey toward the "light" as its main thread, the conference focused on cutting-edge sectors such as the metaverse, XR, games, e-sports, and digital marketing, presented the latest achievements in science and technology, discussed the business issues of the era, explored the future value of new technologies, new businesses, and new models, and joined industry peers in chasing the light in an era of dramatic change.

FBEC Main Venue C, "The Power of Belief: FBEC Global Metaverse CEO Summit", was co-sponsored by the Wuhan East Lake New Technology Development Zone Management Committee and Gyro Technology. It invited Wu Lianpeng, General Manager of the AR/VR Division of Hisense Group's Juhaokan Technology Co., Ltd., to deliver a keynote speech on "Technical Application Trends of Virtual Space and Digital Human Interaction". Wu Lianpeng believes that any discussion of the metaverse that bypasses the development of digital humans is meaningless.

The following is the transcript of the speech:

I am very happy to have the opportunity to share and exchange ideas with you. The previous speakers shared from different business fields; I will start from the basic "human + space" application direction of the metaverse, that is, the technical direction of digital humans plus virtual space. Along the way, I hope to introduce Hisense Group's thinking and what it is currently doing.

We are an Internet company under Hisense; this is the cloud services sector. From the bottom up, there is an enterprise-level, foundational PaaS cloud platform. It currently includes private-cloud PaaS platforms for the digital transformation of State Grid and of medium and large enterprises. In addition, there are the PaaS components required under a general Internet architecture, as well as audio and video solutions, which form the basic capabilities of the mobile Internet.

In 2016, under the group's strategic deployment, we began to take on the XR business, including the exploration of smart glasses.

In 2020, we released the first dual-8K live VR cloud platform in China, a cloud platform built on VR video processing, codec processing, and transmission and distribution. The VR video live broadcast of this conference is also provided by our platform products; it is a relatively mature application direction.

On this basis, across 2019, 2020, and 2021 we continued to explore new solutions combining the XR field with hardware and applications. Our route starts from the digital virtual human engine and extends to an interactive activity platform for the metaverse, built on "human + virtual space".

Everyone is talking about digital humans now. Why is everyone talking about them, and why do only some actually build them? We have been thinking about this question since 2019. At present, while digital human engines have not yet become a general-purpose capability, an enterprise that wants to do well in XR applications and solutions cannot bypass the accumulation of, and breakthroughs in, certain basic core technologies.

When the metaverse comes up, digital humans are mentioned first and discussed most. In fact, five or ten years ago, virtual simulation technology, including its application in games, was already very mature. Therefore, if we bypass the development of digital humans, it is meaningless to talk about the metaverse.

Improving the efficiency and performance of digital human production is the basic logic driving metaverse applications. There are two cases here: one is the hyper-realistic digital humans from European and American companies that are doing very well in the industry; in the upper-left corner is a relatively lightweight digital human. On the basis of digital human development, all walks of life are exploring digital humans in the metaverse.

Although the modeling and rendering technology of digital humans is constantly improving and various fields can be explored, that does not mean every field will mature within a year or two. From our own thinking: take the social metaverse, for example. Meta has invested in it for a long time, but its peak user count has only reached 200,000, and DAU is still declining, so big-C-end social applications are not the direction we want to expand into at present.

From a technical point of view, from the initial construction of a digital human to its actual application in the metaverse, what exactly needs to be done, and what are the application trends? Breaking this down gives the following five stages.

First, producing the human: this is modeling technology. The development trend of modeling technology is, first, to achieve higher accuracy, and second, to obtain that accuracy at lower cost and with less computing power, evolving from multi-view geometry and purely algorithmic methods toward large deep learning models.

Second, after the human is built, it needs to move, including facial movement. The trend here is toward more convenient capture that people can use without even noticing, so that it works at the consumer level: from traditional optical and inertial motion capture toward lighter-weight capture with a single RGB camera suitable for consumer applications.

Third, once it moves, the clothing and hair around the person need to be simulated.

Fourth, after the human model, its movements, and the surrounding clothing are done, the important direction is rendering all of this well.

Fifth, after the first four stages are complete, we can only say that we have an image capable of basic activity and interaction; in the end, it still needs to be empowered with AI to give it a soul. Recently, many people have discussed ChatGPT. ChatGPT can be combined with digital humans, and it will soon change the application direction of many industries, especially digital intelligent assistants, which can replace some human labor and deliver efficiency more effectively.
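The five stages above can be read as a sequential pipeline. The following is a minimal sketch of that idea; the stage names, the `Avatar` fields, and the placeholder values are illustrative assumptions, not Hisense's actual system:

```python
# Hypothetical sketch of the five-stage digital-human pipeline described above.
# All field names and values are illustrative placeholders.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Avatar:
    mesh: Optional[str] = None        # 1. modeling (produce the human)
    motion: Optional[str] = None      # 2. motion capture (face and body)
    simulation: Optional[str] = None  # 3. clothing and hair simulation
    render: Optional[str] = None      # 4. rendering
    brain: Optional[str] = None       # 5. AI layer that gives it a "soul"

def build_digital_human(scan_source: str) -> Avatar:
    """Run the five stages in order and return the finished avatar."""
    avatar = Avatar()
    avatar.mesh = f"mesh-from-{scan_source}"   # stage 1: modeling
    avatar.motion = "single-rgb-camera"        # stage 2: lightweight capture
    avatar.simulation = "cloth-and-hair"       # stage 3: physical simulation
    avatar.render = "8k-textures"              # stage 4: rendering
    avatar.brain = "llm-dialogue"              # stage 5: AI empowerment
    return avatar

print(build_digital_human("high-precision-scan"))
```

The point of the ordering is that each stage depends on the output of the previous one: there is nothing to animate before modeling, and nothing for the AI layer to drive before the avatar can move and render.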

In terms of classification, we divide digital humans into three tiers: from the most basic stylized digital humans, to realistic digital humans, and then to hyper-realistic digital humans.

For stylized digital humans, there are already many general-purpose models abroad, and many industry applications are built on them. We believe they are better suited to games and lightweight entertainment than to industry-level applications.

In 2020, Hisense held its first metaverse TV industry conference, which at the time also used a stylized digital human. But over the past year or two, to expand and scale up industry applications, we have had to advance toward realism and even hyper-realism. Realistic and hyper-realistic digital humans overlap; which to use depends on the specific application scenario.

For example, a realistic digital human generated from a single picture, with a mesh of fewer than 10,000 faces, can support hundreds of thousands of concurrent interactions under current terminal computing power within this range of applications.

This is one of the hyper-realistic images we created ourselves, and it now plays the role of our traffic-drawing spokesperson. Our own hyper-realistic digital human creation technology follows the same trend from academia to real deployment: after high-precision scanning, AI can automate the modeling, shortening work that originally required a professional team several months down to within two weeks. When we say hyper-realistic, it does not only mean 8K textures and pore-level precision; more important are the facial expressions and the refined handling of body and bone movements.

This is a case of ours from the last two weeks. The background was a group interview event involving the five Central Asian countries. The scenario required up to six minutes of broadcast content in Russian, which would be unlikely to be produced for a conventional exhibition hall, but with AI digital humans, including speech technology and automatic motion capture, such content can now be produced within two hours to meet the scenario's demands. When we talk about the metaverse and digital humans, we still need to consider what value is provided in what scenario.

Another case is a more lightweight model; its precision is relatively lower, but every technology serves a concrete deployment scenario. For example, at today's conference we ran two different live broadcasts: one based on VR video, the other based on a metaverse activity. When hundreds or thousands of virtual people gather in the same space for activities and interaction, the pursuit of scale must be balanced against computing power, precision, and the smoothness of real-time interaction. Different scenarios require different degrees of realism; this is the experience we have gained from exploring application landing scenarios. Most scenes may not need 95% realism; more than 60% realism may be enough to achieve a smooth, balanced experience.
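The trade-off described here, realism versus concurrency, can be sketched as a simple level-of-detail policy. The user-count thresholds and realism percentages below are illustrative assumptions, not figures from the talk:

```python
def choose_realism(concurrent_users: int) -> float:
    """Return a target realism level (0.0-1.0) that keeps interaction smooth.

    Illustrative policy: small scenes can afford hyper-realistic avatars;
    halls with thousands of concurrent users fall back to lighter models.
    """
    if concurrent_users <= 10:
        return 0.95   # hyper-realistic, e.g. a one-on-one digital assistant
    if concurrent_users <= 100:
        return 0.80   # realistic, e.g. a virtual classroom
    return 0.60       # lightweight, e.g. a conference hall with thousands

# A crowded metaverse event only needs ~60% realism to stay smooth.
print(choose_realism(2000))
```

In a real system the decision would also weigh terminal GPU budget and network bandwidth, but the shape of the policy is the same: realism is spent where the audience is small enough to notice it.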

Our thinking on the direction of technical services is not the kind of general-purpose application that tries to put all enterprises and users into one large public space, but rather to focus on small, segmented scenarios such as conferences, virtual teaching and research activities, or exhibitions.

Everyone is talking about digital human technology now, but in three years, basic digital human capabilities, whether modeling, driving, or end-to-end AI generation in the cloud, may well become infrastructure technology, a basic capability like cloud computing and public big-data computing power today. During this period, however, practitioners across the industry still need to clarify their own direction and make breakthroughs in key areas if they want to achieve real deployment and monetization.

Citing data from a report: in the post-pandemic era, the share of virtual events and user acceptance of them are gradually increasing. Accordingly, we have a basic product architecture for event scenarios, in which both digital humans and cloud rendering are already basic technical capabilities. Traditional virtual simulation education is also moving toward a more realistic, immersive direction based on multi-person interactive remote spaces, which is the segmented scenario we will focus on in the future.

https://www.donews.com/news/detail/3/3381769.html


Copyright © 2011-2025 Marcus L Endicott
