Mike Seymour (Meet Mike)


Resources:

  • fxguide.com .. vfx and 3d news by mike seymour
  • fxphd.com .. take your vfx skills to a higher level by mike seymour
  • motuslab.org .. researching digital humans (avatars, agents and actors)

References:

See also:

  • 100 Best Personal AI Videos
  • Digital Double & Digital Humans 2018
  • Synthetic Humans 2017


[Mike Seymour]

  • Actors, Avatars and Agents: Potentials and Implications of Natural Face Technology for the Creation of Realistic Visual Presence (2018)

Abstract: We are on the cusp of creating realistic, interactive, fully rendered human faces on computers that transcend the “uncanny valley,” widely known for capturing the phenomenon of “eeriness” in faces that are almost, but not fully realistic. Because humans are hardwired to respond to faces in uniquely positive ways, artificial realistic faces hold great promise for advancing human interaction with machines. For example, realistic avatars will enable presentation of human actors in virtual collaboration settings with new levels of realism; artificial natural faces will allow the embodiment of cognitive agents, such as Amazon’s Alexa or Apple’s Siri, putting us on a path to create “artificial human” entities in the near future. In this conceptual paper, we introduce natural face technology (NFT) and its potential for creating realistic visual presence (RVP), a sensation of presence in interaction with a digital actor, as if present with another human. We contribute a forward-looking research agenda to information systems (IS) research, comprising terminology, early conceptual work, concrete ideas for research projects, and a broad range of research questions for engaging with this emerging, transformative technology as it becomes available for application. By doing so, we respond to calls for “blue ocean research” that explores uncharted territory and makes a novel technology accessible to IS early in its application. We outline promising areas of application and foreshadow philosophical, ethical, and conceptual questions for IS research pertaining to the more speculative phenomena of “living with artificial humans.”

Abstract: Disciplines often approach phenomena from different perspectives and with different research tools. We offer this example of our efforts to embrace the wider CHI values through the exploration of emotive digital humans deployed in HCI. We designed and conducted an HCI experiment with mixed methods. In building an infrastructure that benefits from the strengths of both AIS SIGHCI and ACM SIGCHI research communities, we chose an approach that could reveal undisclosed worlds, hard to see from just one perspective. As technology offers HCI digital humans, new combined shared approaches may be needed to gain insights, especially prior to their wide scale deployment. As bridging related disciplines have failed in the past, perhaps a new approach is needed, one of shared experiences, especially when exploring new technological phenomenon.

Abstract: What should our ethical concerns be in a future with ‘Artificially Intelligent’ agents? The zeitgeist of AI agents often envisions a future encompassing a hyper intelligent singularity. In this worldview, AI “monsters” appear very separate from us as, abstracted, ethically ungrounded omnipotent overlords. A world of superintelligences that have moved beyond our comprehension, with no ethical restraint. In this polemic, I explore a different future. I discount the ‘Robopocalypse’ initially depicted in Science Fiction. Instead, I examine how realistic digital humans do pose a very real and different ethical dilemma, as we assume intelligence based on their appearance, leading to an abdication of responsibility. I phenomenologically explore the future of realistic digital agents and avatars, and ask: what does this human-like form say about us? How will we judge ourselves when the computer, looks like us? I argue that the singularity is unlikely and thus the primary ethical concern is not some superhuman AI intelligence, but in how we, ourselves, treat these digital humans.

Abstract: Meet Mike uses the latest techniques in advanced motion capture to drive complex facial rigs to enable detailed interaction in VR. This allows participants to meet, talk in VR and experience new levels of photorealistic interaction. The installation uses new advances in real time rigs, skin shaders, facial capture, deep learning and real-time rendering in UE4.

Abstract: The 40-year-old Uncanny Valley theory is influential in the discussion surrounding acceptance of realistic graphical agents. This theory was formulated by observing robots. While it has been shown to be valid when observing digital characters, little has been studied about acceptance when people interact with avatars, rather than simply observe a recording. The emerging technology that will soon be able to create realistic avatars challenges the conventional view built on this theory, that affinity is a function of ‘appearance’, necessitating a reevaluation of the dimensions of the problem. We introduce a broader theoretical foundation with an additional dimension, namely interactivity. Studies that have validated the Uncanny Valley used still images or clips from existing media, but did not explore interactivity. In this study we suggest that interactivity operates on an independent, orthogonal dimension to ‘appearance’, and that interaction can ‘overcome the valley’ in affinity due to matching and common human non-verbal cues. We hypothesize that these cause the user to process the avatar differently. We contribute to the literature a new way to theorize the relationship between avatar realism and affinity, including both avatar appearance and interaction, and outline a research design to study this relationship.

Conclusion: Engaging face to face with an interactive computer model requires autonomy with contextual responsiveness. If visually consistent, realistic appearance and movement seem to increase the sensory intensity of the experience. Internally consistent generative models enable cognitive, affective, and physiological factors that drive facial behavior to be produced coherently, justifying a lower-level, more biologically based modeling approach than has previously been taken with virtual human faces. Exploring these elements together allows new yet familiar phenomena to occur. New, because we do not normally experience this sort of interaction with computers; familiar, because we do with people. Being able to simulate the underlying drivers of behavior, realistic appearance and real-time interaction together deliver three aspects of interaction, but virtually: Explore. Allows us to explore how the interplay of biologically based systems can give rise to an emotionally affecting experience on a visceral, intuitively relatable human level; Include movements. Applies an embodied-cognition approach to include the subtle and unconscious movements of the face as a crucial part of mental development and social learning; and Understand key requirements. Gives a basis for understanding the key requirements for more natural and adaptive HCI in which the interface has a face. The virtual infant BabyX is not an end unto itself but allows researchers to study and learn about the nature of human response. There is a co-defined dynamic interaction where one can adjust to BabyX no longer as a simulation but as a personal encounter. In summary, the enormous complexity of modeling human behavior and dyadic interaction cannot be overestimated, but naturalistic autonomous virtual humans who embody and process theoretical models of our behavior and reflect them back at us may give us new insight into core aspects of our nature and interaction with other people–and future machines.

Abstract: This study explores presentation techniques for a chat-based virtual human that communicates engagingly with users. Interactions with the virtual human occur via a smartphone outside of the lab in natural settings. Our work compares the responses of users who interact with an animated virtual character as opposed to a real human video character capable of displaying realistic backchannel behaviors. An audio-only interface is compared additionally with the two types of characters. The findings of our study suggest that people are socially attracted to a 3D animated character that does not display backchannel behaviors more than a real human video character that presents realistic backchannel behaviors. People engage in conversation more by talking for a longer amount of time when they interact with a 3D animated virtual human that exhibits realistic backchannel behaviors, compared to communicating with a real human video character that does not display backchannel behaviors.

  • Smart Mobile Virtual Characters: Video Characters vs. Animated Characters (2016)

Abstract: This study investigates presentation techniques for a chatbased virtual human that communicates engagingly with users via a smartphone outside of the lab in natural settings. Our work compares the responses of users who interact with an animated 3D virtual character as opposed to a real human video character capable of displaying backchannel behaviors. The findings of our study demonstrate that people are socially attracted to a 3D animated character that does not display backchannel behaviors more than a real human video character that presents realistic backchannel behaviors. People engage in conversation more by talking for a longer amount of time when they interact with a 3D animated virtual human that exhibits backchannel behaviors, compared to communicating with a real human video character that does not display backchannel behaviors.

Abstract: Technological developments are bringing interactive computer agents, such as Apple’s Siri, into our everyday lives and routines. These interactive agents are designed to be the focus of our interactions – we can feel “present” with them. Yet current theories of “presence” in IS do not account for the question of what it means to be present with technology in an experiential sense. In response we draw on existential philosophy in order to generate a research agenda for conceptualising presence in the context of what we term human-computer engagement. We suggest that research from this new perspective requires focusing on the situated interaction rather than an a priori assessment of the entities involved. We conclude by considering the ethical questions that emerge when technology is experienced as being an independent agent with which one can be present.

Abstract: Agile Project Management (APM) has gained strong acceptance in software development but its adoption in other industries has not been as swift. We look at the visual effects (VFX) component of the film industry to explore this issue. Using an abductive research approach combined with a survey of existing practices, we aim to investigate an industry whose projects are large, expensive and time critical. Our study hopes to show that VFX companies exhibit many characteristics conducive to APM adoption but it is only within their internal software development teams that they explicitly state their use of APM. We explore why these companies, who exhibit predisposed adoption characteristics use something other than Agile for their non-software related projects. In exploring this surprising position, we hope to gain insights into how other industries may adopt APM and to set a research agenda for APM in non-software development creative companies. 

Abstract: Spherical harmonics are an important tool in the rendering of movie effects. They are also very common in games, yet few people really understand the meaning of spherical harmonics or the techniques for using them. The spherical harmonic function (SH) is a data representation, nothing more. But like the Fourier transform (FT), SH’s change of basis can process large data sets in a short time and achieve striking image effects that were considered impossible only a few years ago.
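The FT analogy in the abstract can be made concrete: just as a signal is projected onto sine waves, a function over the sphere (e.g. incoming light) is projected onto SH basis functions, keeping only a few coefficients. Below is a minimal NumPy sketch, not taken from the paper: it hard-codes the first two bands (l = 0, 1) of the real SH basis and projects a clamped-cosine "light" by Monte Carlo integration; the function names and the two-band truncation are illustrative choices.

```python
import numpy as np

# First two bands (l = 0, 1) of the real spherical harmonic basis,
# with their standard normalization constants.
def sh_basis(x, y, z):
    return np.array([
        0.282095 * np.ones_like(x),   # Y_0^0
        0.488603 * y,                 # Y_1^-1
        0.488603 * z,                 # Y_1^0
        0.488603 * x,                 # Y_1^1
    ])

# Project a spherical function f(x, y, z) onto the basis by Monte Carlo
# integration over uniformly sampled directions on the unit sphere.
def sh_project(f, n_samples=100_000, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(3, n_samples))
    v /= np.linalg.norm(v, axis=0)        # normalize -> uniform directions
    weight = 4.0 * np.pi / n_samples      # equal solid angle per sample
    return (f(*v) * sh_basis(*v)).sum(axis=1) * weight

# Reconstruct the band-limited function in a direction from its coefficients.
def sh_eval(coeffs, x, y, z):
    return coeffs @ sh_basis(np.atleast_1d(x), np.atleast_1d(y), np.atleast_1d(z))

# Example: project a clamped cosine around +z, the kernel games use
# for diffuse irradiance. Four numbers now stand in for the whole function.
coeffs = sh_project(lambda x, y, z: np.maximum(z, 0.0))
```

Evaluating `sh_eval(coeffs, 0, 0, 1)` gives roughly 0.75 rather than 1.0: the two-band reconstruction is a smoothed version of the clamped cosine, which is exactly the trade-off that makes SH compact enough for real-time lighting.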

Abstract: The work of the visual algorithms community (for example the work of SIGGRAPH Technical Papers authors) frequently affects real-world film post production. But often academics in the relevant fields have little idea of the tools and algorithms actually involved in day-to-day post production. This course surveys a range of typical tools and algorithmic techniques currently used in post production and shows how some emerging technologies may change these techniques in the future. The course attempts to demystify some of the processes and jargon involved, both to enlighten an academic audience and inspire new contributions to the industry.
