**[GDC23] Onmind “AI development will make virtual humans more human”**
Beginning on the 20th (local time), the Korean virtual humans Y and TK were on display at the Unity booth at GDC 2023 in San Francisco, USA.
These two virtual humans, developed using Unity’s Digital Human 2.0 package, are the successors to the virtual human Sua, which was first unveiled in 2021.
During GDC 2023, we were able to meet Kim Hyung-il, CEO of Onmind, and Kim Beom-joo, general manager of Unity Korea, the developers behind these virtual humans, which look more like real people thanks to more realistic facial expressions, skin texture, and hair, at the Unity headquarters near the convention center.
Asked about his impressions of introducing his virtual humans at GDC 2023, CEO Kim Hyung-il said, “I am honored to have had the opportunity to take a step toward becoming a global company by participating in GDC, following CES, when we were only recently established as a startup. I hope this will be an opportunity for the company to be recognized more widely around the world.”
Director Kim Beom-joo added, “We are trying to introduce Korean companies with excellent technology and content to the global market so they can take advantage of more opportunities. As the first such case, the collaboration with Onmind led to a GDC session presentation, which I think is meaningful.”
Regarding Y and TK, which were introduced at GDC 2023, CEO Kim Hyung-il explained that wrinkles and blood flow are expressed through a tension map that measures increases and decreases in blood flow across the face.
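The tension-map idea described above can be sketched roughly: measure how much the skin mesh stretches or compresses relative to a rest pose, and use that per-vertex value to drive effects such as blood-flow brightening (stretch) or wrinkle maps (compression). This is a minimal, hypothetical illustration of the general technique, not Onmind's actual implementation.

```python
# Hedged sketch of a tension map: per-vertex strain computed from
# edge-length change between a rest pose and a deformed pose.
# Positive tension (stretch) could drive blood-flow brightening;
# negative tension (compression) could blend in wrinkle maps.
import math

def edge_lengths(verts, edges):
    """Euclidean length of each edge (edges are pairs of vertex indices)."""
    return [math.dist(verts[a], verts[b]) for a, b in edges]

def vertex_tension(rest_verts, posed_verts, edges):
    """Average relative edge-length change per vertex (>0 stretch, <0 compression)."""
    rest = edge_lengths(rest_verts, edges)
    posed = edge_lengths(posed_verts, edges)
    tension = [0.0] * len(rest_verts)
    count = [0] * len(rest_verts)
    for (a, b), r, p in zip(edges, rest, posed):
        strain = (p - r) / r
        for v in (a, b):
            tension[v] += strain
            count[v] += 1
    return [t / c if c else 0.0 for t, c in zip(tension, count)]

# Tiny triangle "mesh": one edge stretched, one compressed.
rest = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
posed = [(0, 0, 0), (1.2, 0, 0), (0, 0.8, 0)]
edges = [(0, 1), (0, 2), (1, 2)]
print(vertex_tension(rest, posed, edges))
```

In a real pipeline this value would be baked into a texture per frame and sampled in a skin shader to blend wrinkle normal maps and subsurface color.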
“The most difficult part of creating any virtual human is the expression of hair and fur. There are basically tens of thousands of strands, and each is very thin, so they are difficult to render. All of this could be solved with Unity Hair, which is included in the Unity Digital Human 2.0 package,” he said.
He also explained that the various facial expressions were implemented with Unity's machine learning-based facial expression generation solution, Ziva Face Trainer, which can create high-quality facial expressions in a matter of hours.
Asked what level of completeness virtual humans have reached, he assessed that they have essentially hit their limit in visual terms.
CEO Kim said, “I think we have reached the limit in terms of appearance. We intentionally placed two photos side by side in the virtual human making-of video so viewers could compare the real human and the virtual human. However, for a virtual human to become more human, it has to move and speak.”
He continued, “This requires facial expressions, movement, and sophisticated motion capture. If emotional expression is added on top of that, I think we will see characters that come close to real people.”
He also predicted that the rapid development of AI would speed up the development of virtual humans.
Emotional expression through voice was also cited as an element that will add realism to virtual humans. Unity said research on this is already underway with the Unity engine.
Director Kim Beom-joo explained, “The Unity engine has a system that can link to voice APIs. There are also cases where ChatGPT has been used in the Unity engine to develop tutorials that let users converse with NPCs in a game.”
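The NPC-conversation idea mentioned above can be sketched as a game-side dialogue loop that keeps per-NPC chat history and sends it to a chat model. This is a hypothetical illustration: the model call is stubbed out (`fake_model`) so the sketch is self-contained; in practice it would be an HTTP call to a chat-completion API, and in Unity the loop would be written in C#.

```python
# Hedged sketch (not Unity's implementation): an NPC that keeps a
# conversation history in the chat-message format used by chat APIs
# and asks a model for each in-character reply.

def fake_model(messages):
    """Stand-in for a real chat-completion API call (hypothetical)."""
    last = messages[-1]["content"]
    return f"You said: {last!r}. Welcome to the village, traveler."

class NPC:
    def __init__(self, name, persona, model=fake_model):
        self.model = model
        # A system message pins the NPC's persona for the whole conversation.
        self.history = [{"role": "system",
                         "content": f"You are {name}, {persona}. Stay in character."}]

    def say(self, player_line):
        """Append the player's line, get a reply, and remember both."""
        self.history.append({"role": "user", "content": player_line})
        reply = self.model(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

npc = NPC("Mira", "an innkeeper in a fantasy village")
print(npc.say("Hello! Do you have a room for the night?"))
```

Keeping the full history per NPC is what lets the model stay consistent across turns; a real integration would also need to trim the history to fit the model's context window.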
As the quality of virtual humans improves, more advertisements and content featuring them are being released. It is also true, however, that they are still limited to playing back pre-made motion for tens of seconds to a few minutes.
CEO Kim Hyung-il was confident that virtual humans would be able to go beyond these limitations and show much more.
CEO Kim said, “What I ultimately envisioned was holding a live concert with a virtual human. But there were many hurdles to overcome. We needed motion capture equipment and a large space to put it in, as well as a device capable of the very difficult task of capturing fingers.”
He continued, “It took a long time to prepare, but I think we will show a proper live concert within this year. Only then, I believe, will we see truly interactive content rather than pre-structured, one-way video.”
Director Kim Beom-joo said that Unity continues to support the development of more realistic virtual humans. Briefly introducing new technology for this, he noted that for a virtual human to be perceived as real, the background environment must be made realistic as well, not just the virtual human itself.
Director Kim said, “Real-time ray tracing, a technology that realistically renders reflected light, has stabilized and its performance keeps improving, so it can be used to create realistic backgrounds that match a virtual human's face. The technology of SpeedTree, a vegetation modeling toolkit company Unity recently acquired, will also be a great help in making natural objects more realistic.”
Both interviewees were very positive about the future potential of virtual humans. In particular, they drew attention by predicting strong synergy if AI and virtual humans were combined.
CEO Kim Hyung-il said that 3D virtual humans would have great advantages when combined with generative AI, and predicted this would be a major opportunity for Onmind, which develops 3D virtual humans.
Director Kim Beom-joo said, “When virtual humans are linked with generative AI, it will become possible to simulate human-to-human relationships through conversation. I hope they can develop into friend-like presences that can talk about a variety of topics, beyond just performances or advertisements.”
https://zdnet.co.kr/view/?no=20230324183429