Avatars, Agency and Performance: The Fusion of Science and Technology within the Arts
Richard Andrew Salmon 2014

8. Pre-Analysis: Key Concepts

This section presents key concepts that help to explain what was found in the themed data analysis, how it impacts human-machine interaction, and how these concepts inform the design refinements presented in section 9.

8.1 A Situated Robot with Physical Separation

Solving the problems presented by the various operational obstacles that presently hinder the quasi-autonomous, prolonged and successful operation of robots in human environments is not simple. Navigation by autonomous robots in human environments is uniquely complicated because of the vast number of variables presented by obstacles and moving objects in three-dimensional space. This is one of the reasons why we have what I term robotic devices as opposed to actual robots. The difference between the two is a grey area, and it is not necessary to hold that debate here; suffice to say that we call the Articulated Head both a robotic device and a robot, because it was not free to move in space as a human can, yet it did claim and present humanoid features and capabilities to some extent. It is precisely because robotic devices are limited or restricted in one dimension or another that they function successfully within the environments for which they are designed: the restriction simplifies the number of confounding considerations that must be taken into account during their design.

The Articulated Head was one such robotic device, restricted in one specific dimension: it was a situated robot, fixed to the ground, and therefore unable to move from its base. This significant fact simplified its operation greatly, because the list of navigational requirements for its successful operation was dramatically reduced. Furthermore, the Articulated Head was separated from its audience by an enclosure. Its robotic arm was capable of navigating three-dimensional space within a limited half-sphere with a radius of between two and three metres from its base. The enclosure was situated just outside this navigational boundary, which meant the arm had no obstacles to navigate and was able to move freely within its immediate space. Therefore the Articulated Head did not actually have to navigate physically in a human environment, because its space was its own, and humans were effectively excluded from that space. This physical separation helps to explain the lack of references to the sense of touch in the empirical research data set, and also explains the caged-animal zoo-exhibit reports by research participants in Theme 3: Physical Presentation and Operational Aspects. Nevertheless, the Articulated Head and its environmental conditions were very much connected and operational in the human environment on almost every other level; it certainly navigated the human environment on the auditory, visual and mental planes.
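The effect of this single restriction can be made concrete with a small sketch. The figures below (a base-centred hemisphere of roughly two to three metres radius, with the enclosure just outside it) come from the description above; the function and constant names are illustrative assumptions, not part of the Articulated Head's actual software.

```python
import math

# Illustrative sketch: the Articulated Head's reachable space was a
# hemisphere centred on its fixed base. Anything outside that boundary,
# including the enclosure and the audience, was simply not part of its
# motion-planning problem.

REACH_RADIUS_M = 3.0  # outer reach of the arm (approximate, per the text)

def is_reachable(x, y, z, radius=REACH_RADIUS_M):
    """True if the point (x, y, z), in metres from the base, lies inside
    the arm's hemispherical workspace (z >= 0 means above the base)."""
    return z >= 0 and math.sqrt(x*x + y*y + z*z) <= radius

# A visitor standing behind the enclosure, 3.5 m away, is outside the
# workspace, so the arm never needs to plan around them:
print(is_reachable(3.5, 0.0, 1.0))   # False
print(is_reachable(1.0, 1.0, 1.0))   # True
```

The design point is that a single geometric restriction eliminates the entire class of human-avoidance calculations, which is combinatorial reduction in miniature.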

8.1.1 Key Concept 1: Intentionality

Intentionality is especially relevant in relation to analysis of interactions that were taking place between humans and the Articulated Head, because it relates to the hopes, wants and beliefs of the interacting human.

Dennett’s intentional stance (D. C. Dennett, 1989) describes an innate ability with which we as human beings are endowed. That is not to say that other biological beings are not endowed with this ability to a greater or lesser extent too, but for the purposes of this discussion we shall limit our frame of reference to human endowments. The innate ability described is that we are able to predict, on a regular basis and with a significant degree of accuracy, outcomes from a set of indicative circumstances, even if these predictions sometimes turn out to be incorrect. This predictive ability of the intentional stance has intentionality at its heart. In what follows I explain what the term refers to and why it is important to this investigation of human-machine interaction.

“Intentionality is aboutness. Some things are about other things: a belief can be about icebergs, but an iceberg is not about anything; an idea can be about the number 7, but the number 7 is not about anything; a book or a film can be about Paris, but Paris is not about anything. Philosophers have long been concerned with the analysis of the phenomenon of intentionality, which has seemed to many to be a fundamental feature of mental states and events” (D. C. Dennett & Haugeland, 2012, p. 1).

One might argue that an iceberg is about something (an accumulation of frozen water), or that the number 7 is about the quantity or measure of something. Leaving this semantic argument aside, however, Dennett and Haugeland’s quote raises what matters most here: the analysis of the phenomenon of intentionality in this thesis, and more specifically the identification and analysis of phenomena that appear to extend from it.

“Phenomena with intentionality point outside themselves, in effect, to something else: whatever they are of or about. The term intentionality was revived by Franz Brentano, one of the most important predecessors of the school of phenomenology” (D. C. Dennett & Haugeland, 2012, p. 1).

In essence, intentionality is said to be the aboutness of mental states and phenomena such as wants, hopes and beliefs: the latent or static charges that a belief or hope may carry, and what those charges appear to gravitate towards, namely the thing that the belief or hope is about. Notably, when this description of intentionality is subjected to closer examination, it becomes clear that intentionality comes with the concept of intentional relations, which carry some interesting characteristics and attributions that do not follow the rules normally consistent in ordinary relations. Dennett and Haugeland point out that a belief can be about both real and non-existent entities. The possibility of the inexistence of the object of intentionality, the object that a thought, hope, want or belief is pointing to, is especially relevant to the analysis of the interactions that took place between research participants and the Articulated Head, and to how one can improve this human-machine interaction.

“Brentano called this the intentional inexistence of the intentional objects of mental states and events, and it has many manifestations. I cannot want without wanting something but what I want need not exist for me to want it” (D. C. Dennett & Haugeland, 2012, p. 2).

This quote is important to this investigation because the intentional inexistence of intentional objects appears to be a plausible explanation for some of the participant behaviours exhibited and observed during interaction with the Articulated Head. The research data shows that participants certainly hoped for, wanted and believed in capabilities of the Articulated Head that did not exist.

8.1.2 Key Concept 2: Combinatorial Explosion

Daniel C. Dennett speaks of “combinatorial explosion” (D. C. Dennett, 1997, p. 77) in relation to system design. To illustrate it, Dennett employs a thought experiment of the kind often used by philosophers. He asks the reader to imagine a hypothetical competition between a human and a seemingly hyper-intelligent Martian. The human applies what Dennett calls the “intentional stance”, while the Martian applies what he calls the “Laplacean deterministic physical stance”; the two pit these predictive strategies against each other to see which of them can make an accurate prediction first. Based on the details of an observed telephone call, the human and the Martian each use their predictive method to determine what they think will happen as a result of the content of the call. The phone call proceeds as follows:

The telephone rings in Mrs Gardner’s kitchen, she answers and this is what she says: “Oh hello dear. You’re coming home early? Within an hour? And bringing the boss to dinner? Pick up a bottle of wine on the way home then, and drive carefully”. (D. C. Dennett, 1997, p. 68)

The human predicts the arrival of a car at Mrs Gardner’s house with two humans in it, one carrying a bottle of wine. The human makes the same prediction as the Martian, but arrives at it in an entirely different way, and much faster, leaving the Martian amazed at the apparent intellectual dexterity of the human, whom the Martian had previously perceived to be the lesser intelligence. This seemingly magical predictive ability amazes the Martian because the Martian is bereft of any of the knowledge and skills that the intentional stance and strategy purvey. The Martian assumes that the human must have arrived at the same conclusion by calculating all the possibilities and variables that the Martian had calculated, and had done so much faster; therefore the human must possess previously unrecognised processing powers. Work smart, not hard, is the underlying moral of this tale. However, in relation to system design and “combinatorial explosion”:

“Increasing some parameter by, say, ten percent – ten percent more inputs or degrees of freedom in the behavior to be controlled or more words to be recognised or whatever tends to increase the internal complexity of the system being designed by orders of magnitude. Things get out of hand very fast….” (D. C. Dennett, 1997, p. 79).
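Dennett’s warning can be illustrated with simple arithmetic. Assuming, purely for illustration, a system whose inputs are binary and whose internal complexity tracks the number of distinct input combinations it must handle, a ten-percent increase in inputs multiplies that number by three orders of magnitude:

```python
# Illustrative arithmetic for combinatorial explosion: if a system must
# distinguish every combination of n binary inputs, it faces 2**n cases.
# Increasing n by ten percent multiplies the case count enormously.

def combinations(n_inputs):
    """Number of distinct states of n binary inputs."""
    return 2 ** n_inputs

n = 100                # a modest controller with 100 binary inputs
n_plus_10pct = 110     # ten percent more inputs

growth = combinations(n_plus_10pct) // combinations(n)
print(growth)          # 1024, i.e. 2**10: three orders of magnitude
```

The model of complexity here is a deliberate simplification; real systems grow in messier ways, but the exponential character of the growth is the point Dennett is making.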

8.1.3 Key Concept 3: Combinatorial Reduction

Therefore, when designing and redesigning systems, it makes sense to consider the opposite of “combinatorial explosion” (D. C. Dennett, 1997, p. 79); here the term combinatorial reduction springs to mind, so this term will be used henceforth. Combinatorial reduction is thus defined as the employment of methods that achieve design refinements whilst avoiding or reducing the impact that combinatorial explosion might have upon their implementation.

Indeed, combinatorial reduction is at work in the design of many devices, where the restriction of particular parameters is desirable in order to render the device’s operation successful in the physical and practical worlds.

Combinatorial reduction as a rule of thumb

Combinatorial reduction as a rule of thumb is a necessary and frequently desirable aspect of both project management and systems design, given the ramifications of combinatorial explosion (D. C. Dennett, 1997, p. 79). The Articulated Head was no exception. Retrospective combinatorial reductions were made to the original operational design perspectives presented in the grant application E-Appendix 2: From Talking Heads to Thinking Heads and in the diagram in Figure 2-4 (Herath, 2012): it became too difficult to maintain and secure the successful functional status of the Data Logger, Sonar Proximity Client, Audio Localizer and Face Tracker software, or to implement some of the original plans, given the size and commitments of the project team involved. Further reference to combinatorial reduction and its employment is made in the design refinements put forward in section 9.

8.1.4 Key Concept 4: Embodiment

The book How the Body Shapes the Mind (Gallagher, 2005, Introduction) references the seemingly deterministic, and certainly influential, aspects of embodiment and its role in conditioning us as ‘soon to be’ humans. By the time we are separated from our mothers’ wombs, this conditioning has already been so powerful that we can see a reflection of our own form in the faces of others as soon as we open our eyes, and are almost instantly capable of facial imitation, the smile and so forth. Gallagher goes on to suggest that embodiment is an inescapable fact; that we, as human brains and minds, are first and foremost embodied; and that this embodiment has inexorable and as yet unfathomed influences upon our very nature and existence, upon our sense of self. This influence of embodiment is said by Gallagher to go beyond consciousness into the unconscious mind, and even into what he terms the “prenoetic” or “before you know it” (Gallagher, 2005, p. 5). Embodiment certainly shapes, and possibly even facilitates, the existence of our sense of self; it influences the structuring of consciousness and therefore influences our perception of everything, phenomenal and intentional experience included. The theoretical viewpoint projected by Gallagher is, as a human being, not hard to accept, even the prenoetic element, that which we are not yet conscious of. Indeed, a few minutes of quiet contemplation and reflection upon our consciousness and sense of self renders this theoretical perspective not only likely but seemingly a sure thing.

The associations that embodiment imposes upon a human brain have a very powerful, if not deterministic, effect upon a human’s perception of their lifeworld and of the phenomena experienced through their senses. It follows that this effect is present and at work in the human-machine interaction under investigation in this thesis.

Embodiment and its ramifications are a critical aspect considered in relation to the evaluation of The Creative New Media Additions detailed in section 6 and evaluated under Theme 12: Auditory Visual Interactive Environment.

The embodiment of the avatar that was the face of the Articulated Head is also important in terms of a human’s perception of it as an intentional agent. Human observation of robotic devices in action generally procures the attribution of a largely unintelligent status. If the Articulated Head had not had a screen displaying a face on the end of its industrial robotic arm, and the arm had moved randomly or in predefined patterns rather than actually tracking the human observer, then a similar attribution of a largely unintelligent status would have been likely. However, because the industrial robotic arm was capable of displaying more complex actions and movements in three dimensions, which more closely resembled aspects of human movement, it was more likely to procure a human observer’s attribution of it as an intentional agent. The moment the arm began to track the observer’s movement and position, or performed a specific task of one form or another that required some complexity, it immediately, from the human observational perspective, became an intentional agent: it appeared to have a mind of its own and could, therefore, possibly represent a threat.

8.1.5 Key Concept 5: Identification of Agency

Self-preservation is a primal instinct of all living creatures, and the will to defend one’s existence in the space-time continuum is an overwhelmingly compulsive intrinsic mechanism in all of them. To decide upon defensive actions, creatures generally identify moving, and possibly intentional, agents, and subsequently attribute to them the possibility of intelligence and the ability to commit intentional acts, thereby raising their status as possible threats until it has been established otherwise.

Charles Abramson of Oklahoma State University, when discussing the biological criteria for fine-tuning of intentional agency in regard to worms, comments that:

“Only organisms with central nervous systems are capable of fine-tuning their bodily movements for the performance of intentional acts” and “internally generated flexible behaviour appears to be confined to organisms with central nervous systems” (Abramson, 2012).

Whilst Abramson does not concede that internally generated flexible behaviour can appear to be exhibited by a pre-programmed agent with either random or sensor-controlled threshold routines, it should be noted that the extracts above were made with reference to biological entities only, and it is normal for humans to attribute the ability of internally generated flexible behaviour only to other creatures. Where intentional acts are perceived in an object of attention other than another creature, the human observer very quickly concludes that an external, and very possibly human, agent must be involved (because of the intelligence required); that this agent is trying to exact control over the object in question; and that the external agent may also have further intentional acts motivating that control. The human suspicion of a controlling agent is rather neatly demonstrated under Theme 2: Expectations and Enculturation by the children in the Powerhouse Museum who thought I was controlling the Articulated Head.

Abramson notes that “mental states such as beliefs and desires are primarily identified through the performance of intentional acts, which presuppose the notions of trying and control” (Abramson, 2012).

Trying and control are clearly identifiable as characteristics operational in intentional acts; both are synonymous with human behaviour, and both can be exacted through given agency. As just one example, one can try to convince the human interacting with the Articulated Head that the Head is interested in them and is executing an intentional act, by controlling the Articulated Head’s arm and screen position to face the coordinates of the interacting human. This can be achieved by including a stereo tracking camera linked to the Articulated Head’s industrial robotic arm motor commands through a pre-programmed coded interface, as was the case with the Articulated Head’s implementation. However, let us not forget that exacting trying and control through given agency entails the ramifications of system design synonymous with “combinatorial explosion” (D. C. Dennett, 1997, p. 79). To endow any non-human object with given agency that can process and perform the diversity of intentional acts with the apparent consciousness, dexterity, flexibility and speed of a human being is thus far beyond realisation through the application of science and technology; whilst we may be able to imagine it, we are not able to realise it within the confines of our knowledge of the physical world. The reasons for this are fairly simple: we do not have sufficiently sophisticated sensory apparatus and processing units that can perform at levels and resolutions comparable with the human brain. It is worth noting that the primal instinct for self-preservation appears to gravitate towards other moving, and possibly intentional, agents, because these are the attributes that a human possesses by virtue of embodiment and a central nervous system, and presumably because a human naturally recognises that these attributes make them a proactive threat to other beings. That is not to say that other threats to human safety and existence do not exist within the environment; clearly they do. It is just that, in the hierarchy of active attentional immediacy in relation to this primal instinct, self-protection gravitates towards moving, and thus possibly intentional, agents first.
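As a minimal sketch of the kind of “trying and control” just described: given a tracked person’s position, as a stereo camera might report it, a controller can compute the pan and tilt angles that would point the screen at them. The function below is a hypothetical illustration of that one step, not the Articulated Head’s actual control code, and the coordinate convention is an assumption.

```python
import math

# Hypothetical tracking step: convert a person's position, reported in
# the robot's base frame (metres), into the pan and tilt angles that
# would aim the screen at them.

def aim_at(x, y, z):
    """Return (pan, tilt) in degrees to face the point (x, y, z).
    Convention assumed here: x is right, y is forward, z is up,
    all relative to the screen's pivot."""
    pan = math.degrees(math.atan2(x, y))            # left/right rotation
    horizontal = math.hypot(x, y)                   # ground-plane distance
    tilt = math.degrees(math.atan2(z, horizontal))  # up/down rotation
    return pan, tilt

# A visitor 2 m ahead and 2 m to the right, level with the screen:
pan, tilt = aim_at(2.0, 2.0, 0.0)
print(round(pan), round(tilt))   # 45 0
```

Even this trivial loop, run continuously against camera output, is enough to make the arm appear to attend to a person, which is precisely the point: a small amount of given agency can procure the perception of an intentional act.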

This means that movement can easily attract human attention, because it activates a primal human instinct that initially attributes the status of ‘possible intentional agent’ to the moving object. But what happens after the immediacy of this attribution has passed to retain human attention?

Human abilities taken for granted

Human beings have a tendency to take for granted their own abilities for navigation, decision-making and manoeuvring within three-dimensional environments. They can effortlessly avoid physical contact with moving objects, some of which, such as cars and other people, have the potential to harm them in collisions. They can circumvent obstacles that stand in the path to their destination and can easily recognise the elements present within the immediate environment. They can identify at a glance the constitution of many elements that surround them: animal, vegetable or mineral. They can also identify many liquids, solids and gases through comparative analysis of data collected by their sensory apparatus, cross-referenced against memory and lived experience at lightning speed. They can conceive, design, build, pick up, move and rearrange three-dimensional objects, and remember their proper places in their operational spaces, with ease. Humans are incredibly versatile, complex and dexterous entities. Humans are nothing short of completely amazing, and thus it takes them only a very short period of time to separate the attribution of given or predetermined agency from that of agency delivered by an active and present intentional agent in an interactive environment.

Extending the time required for human identification and attribution of these differing types of agency is important in relation to the interactions that were taking place between the Articulated Head and its interacting audience, because it represents a clear target for one way in which interaction between humans and machines can be improved. User input string data analysis presented in Theme 1: Anthropomorphism identifies a human proclivity towards anthropomorphism in this interactive environment. This provides an opportunity to exact some control over the time taken by the human to separate the attribution of given or predetermined agency from that of agency delivered by an active and present intentional agent, by catering for this anthropomorphic proclivity through the use of suggestive, generative, targeted intentional language acts. This suggestion is discussed in more detail in 9.2.6 Dialogue.

The following quote sums up the remarkable demonstrable capabilities of our brain, and poses the question: how does it do what it does?

“Now somehow the brain has solved the problem of combinatorial explosion. It is a gigantic network of billions of cells, but still finite, compact, reliable and swift, capable of learning new behaviours, vocabularies, theories, almost without limit. Some elegant, generative, infinitely extensible principles of representation must be responsible. We have only one model of such a representation system: a human language. So the argument for a language of thought comes down to this: what else could it be? We have so far been unable to imagine any plausible alternative in any detail” (D. C. Dennett, 1997, p. 77).

The point of presenting the quote above is that it nicely articulates the human’s language of thought as a key target for investigation, indoctrination and manipulation in relation to extending the time taken by the human to separate the attribution of these types of agency. Furthermore, since the overtly problematic issues associated with combinatorial explosion in machine development cannot readily be solved by the current state of the art in science and technology, a focus on developing the human’s perception of engagement and agency in interaction, as opposed to intensive machine development, is clearly highlighted as one of the key ways in which interaction between humans and machines can be improved in such interactive environments.

Internally generated flexible behaviours: the source of agency

The human’s instinctive opposition to the idea that a non-biological object conducting intentional acts can actually be exhibiting generative or internally generated flexible behaviours has enormous implications for participant interaction with the Articulated Head, and indeed for human-computer interaction in general. If the object of attention is not a biological being, then any concept of relationship building in reflective, rather than reflexive, interaction transcends the object of attention by transference: the focus of the relationship becomes the agency, and the source of that agency, rather than the object itself. This point of departure between the object conducting intentional acts and the source of its agency (that which is trying to control it) has profound implications when attempting to answer the big question:

• How can the interaction between humans and machines be improved?

Interacting with what? The giver of agency

Asking another question can highlight the problem with this big question from a semantic and theoretical, if not practical and physical, perspective: what exactly is it that the human in this relationship is interacting with? On an intellectual level the human is, in fact, interacting with the giver or givers of agency to the machine, and not with the machine itself; the human is interacting with a third party (another human or humans) through the machine, and the machine is the medium through which this interaction takes place. Naturally, the agency present during interaction is limited by the ramifications that “combinatorial explosion” (D. C. Dennett, 1997, p. 79) brings to the system design, and consequently improvement of the interaction between the parties involved holds a fairly direct, linear and correlative relationship with the scope and limitations of the agency given to that machine.

Extension of agency given to the machine

Extending the agency given to the machine by increasing the number of parameters to be controlled, or by providing “more inputs or degrees of freedom in the behavior to be controlled or more words to be recognised or whatever” (D. C. Dennett, 1997, p. 79), will directly expand the scope of the interaction taking place, and will therefore result in a perceived improvement in interaction between the human and the machine, regardless of the semantic and theoretical point just made in the paragraph above.

Transparency of the medium

Put even more simply, on a very practical, physical, tangible level, without any consideration for the mental states and beliefs that may exist in the human minds party to these interactions: if the machine becomes the medium through which communication between the real generative intentional agents takes place, then the transparency of the ‘human to human’ communication occurring through this medium is directly proportional to the transparency (or bandwidth) of the medium itself. Combinatorial reduction is therefore deterministic with regard to the big question:

• How can interaction between humans and machines be improved?

Put another very practical way: combinatorial reduction, through its deterministic influence upon the scope of any agency given to the machine, defines the scope of any interactions that can take place. That is, it defines the limitations imposed upon these interactions, thus directly attenuating any scope for improvement of interaction between the human and the machine. This point holds simply because the machine is not really the entity at the other end of the interaction with the human in the projected human-machine relationship referenced in the big question.

Given the statements made in the paragraphs above, the Martian in Dennett’s hypothetical scenario, who is imbued with the Laplacean deterministic physical stance and strategy (or, perhaps I should say, limited by combinatorial reduction, being bereft of the predictive abilities that intentionality affords the human), would no doubt conclude that the only way to improve interaction between humans and machines would be to expand the scope of agency given to the machine exponentially, until such time as the machine, being the medium through which communication between the real generative intentional agents takes place, becomes transparent. The Martian would likely conclude that, regardless of the ramifications of combinatorial explosion, if one conducted enough calculations and adopted ever more ingenious technological design features, then transparency of the medium would eventually be realised; the Martian would therefore set out on a quest to solve all the problems of combinatorial explosion to achieve transparency of the medium.

However, the human, perhaps through reading Dennett’s writings, though more likely through intuitively recognising it from their own experience of existence and practice in both artistic and technical creation, would note that the human brain has solved the riddle of combinatorial explosion in a way that is incomprehensibly complicated, beyond the scope of current human scientific endeavour and manufacturing capability to replicate in the space afforded by a football pitch, let alone a skull. The human would very likely conclude that the Martian’s quest was folly and choose a simpler option. After all, for the human, life is much too short to contemplate setting out on a quest to achieve the seemingly impossible, especially when there is clear evidence in front of the human that the Martian, who uses a Laplacean deterministic physical stance and strategy, performing billions of calculations to solve the problems of combinatorial explosion whilst simultaneously engineering the transparency of the interactive medium, is in fact bereft of the ability to identify, let alone understand, the consequences of intentionality: consequences that are derivatives of the only example of an entity that has demonstrably already solved the problem of combinatorial explosion, the biological brain.

Naturally, the Martian does not believe that there is a simpler option, because the Martian does not understand the human capacity for the perception, or projection, of intentionality from thought and action, especially when it comes to the “intentional inexistence of the intentional objects of mental states and events” (D. C. Dennett & Haugeland, 2012, p. 2).

Transparency of medium and scope of interactions

There are, of course, many situations where there is not a direct correlation between the transparency of the medium and the scope of the interactions that can take place through it. The correlation argument would only hold true if each and every entity involved in the interaction were a machine, which they are not in this case. Although the ‘transparency of the medium/interaction scope’ correlation holds true in the physical and practical world, it does not hold true with regard to the mental states of the real generative intentional agents: the two (or more) human entities involved in these interactions.

To elucidate this point more clearly, one must first consider that the term agency given carries with it the implicit charge of its installation in the machine, and that the scope of any agency given restricts the scope of interaction only on a physical and practical level, not on a semantic or mentalist level. This is because perceived agency can be invoked and evoked rather than just installed, and can indeed, by virtue of the possibility of the “intentional inexistence of the intentional objects of mental states and events” (D. C. Dennett & Haugeland, 2012, p. 2), be a non-entity from the perspective of the designers, or supposed givers of agency. It is quite possible for a person interacting with the Articulated Head to perceive agency that was not given to the machine by any third party.

Humans can and do perceive and believe in things that are based on very flimsy evidence with little or no concrete substantiation in the physical or practical world at all.

I make the above point explicitly here because the empirical evidence collected from Video Cued Recall Interviews of participant interactions with the Articulated Head has clearly identified that some participants did indeed perceive, believe and act upon agency that was not intentionally given to the machine.

The constructed realities of the human mind do not have to be substantiated and scientifically accepted realities of the physical world to be real and true to the mind in which the constructed reality exists.

This point, or opinion, depending largely on your philosophical perspective on what constitutes the current state of play with regard to the explainable universe, relates to this investigation because the empirical evidence collected from participant interactions with the Articulated Head has clearly identified that some participants did indeed construct their own realities from the perception of agency that was not intentionally given to the machine.

The degree to which each side of this interplay between the physical and practical as opposed to the semantic and mentalist worlds impacts upon the scope of the human-machine interactions under investigation here is apparent in the research data, with examples including participants believing that the Articulated Head could see, hear, think, feel emotions and conduct such activities as flirting.

In a scene towards the end of the film Harry Potter and the Deathly Hallows: Part 2 (Warner Bros, 2013b), Harry asks Professor Dumbledore the question:

“Professor, is this real? Or is this all happening in my head?”

To which the professor replies:

“Of course it’s happening inside your head Harry, why should that mean it’s not real?”

This line rather nicely sums up a critical point surrounding performance and the perception of it that I have been trying to put across in this thesis.

The key concepts presented above all impact the interactions that have been under investigation herein. Hopefully it is now clear that practical and physical restrictions, agency given, the transparency of the medium, intentionality, and the various mental states of any and all of the biological entities party to interactions all have a role to play. This includes any actions, events and behaviours perceived or instigated, whether accidental or otherwise, real or imaginary, rational or irrational, reflexive, reflective, reactional or relational, that may result from installation, invocation and/or evocation. All of these factors shape the interactions, and therefore impact upon this investigation.

8.1.6 Key Concept 6: Acquisition of knowledge

The investigation of how human-machine interaction can be improved is also influenced by the acquisition of knowledge from the data in analysis. For example, there is that which one could establish or deduce from observation of an interaction: the Articulated Head turns to face the participant, or the participant appears to be losing interest, and so forth; but this tells us nothing of what the participant is actually thinking or feeling. Then there is that which one could establish or deduce from what a participant said in the Video Cued Recall interviews: the Articulated Head is getting angry, or he is so rude. This is participant declarative knowledge about something perceived, and it does not require that something to exist in the real practical or physical world for it to exist in the mind of the one who declares this knowledge. There is also that which can be established or surmised through actions and repetition of actions, including other aspects of interactivity such as the speed of participant responses to familiar scenarios. These actions typically manifest during interactions where a participant displays repeated actions, or learned behaviours, as the result of enculturation to the specific interactive environment. This procedural knowledge may be implicit in the interaction but not explicit in participant declaration. Finally, there is the deduced and/or induced knowledge, existential experience and constructed realities that the person analysing the research data brings to the analysis table and findings, further complicating the interpretation of data.

So, given all that has been said, though it is not exhaustive of all related theories and theoretical perspectives in existence, it should now be clear that this study had a very broad brief, with the central interactions under investigation defying a reductionist methodology due to the contribution and combinatorial explosion brought to the investigation by the inclusion of human minds in the interaction equation. That is to say, the only clear example of an entity that has demonstrably already solved the problem of combinatorial explosion, the biological brain, brings to the investigation the same problem that it alone has solved.

This point may, with hindsight, appear obvious to the reader; if so, then good, because this point defined the chosen methodological approach (phenomenology), vouchsafes the methods adopted during this investigation, and frames the validity of any findings firmly within the realms of the interpretation of phenomena. Correlations between types of phenomena experienced during interactions, the frequency with which they occur, and interpretation of the conditions that appear to support the manifestation of a particular phenomenon, whether with just one participant or across a range of participants: these are the main features of focus for this investigation.

It comes as no surprise, then, that Franz Brentano was “one of the most important predecessors of the school of phenomenology” (D. C. Dennett & Haugeland, 2012, p. 1). The potential for “intentional inexistence of the intentional objects of mental states and events” (D. C. Dennett & Haugeland, 2012, p. 2) is a uniquely located phenomenon, known only to exist in the biological brains of creatures of consciousness. Therefore, since phenomenology, as a methodology, is focused on the phenomena of consciousness, its status as an appropriate candidate for employment in this investigation is thus ratified.

So, what exactly do we need to know in order to address the big question that we are trying to answer in this study?

1. We need to identify any phenomena manifest during interactions.
2. We need to identify the conditions that appear to support the manifestation of these phenomena.
3. We need to establish whether the phenomena identified appear to have a positive or negative influence upon human-machine interaction.
4. Then we need to consider the conditions that appear to support the manifestation of any particular phenomenon, and how those supporting conditions might be increased or decreased in frequency during interactions, according to whether the phenomenon’s influence on interaction has a positive or negative effect upon engagement.

“One doesn’t reduce Turing machine talk to some more fundamental idiom; one legitimizes Turing machine talk by providing it with rules of attribution and exhibiting its predictive powers. If we can similarly legitimize “mentalistic” talk, we will have no need of a reduction, and this is the point of the concept of an intentional system. Intentional systems are supposed to play a role in the legitimization of mentalistic predicates parallel to the role played by the abstract notion of a Turing machine in setting down rules for the interpretation of artifacts as computational automata” (D. C. Dennett, 1989, p. 67)

Legitimisation rather than reduction is part of the role of an intentional system.

Since all the entities active in the interactions that took place between an audience and the Articulated Head were intentional systems of one form or another, what we really need to know is what was actually taking place during these interactions, and what the perceptions of participants were during them. We can then find empirical evidence that effectively legitimises the apparent conditions under which occurrences of the phenomena experienced proliferate, and adjust the intentional systems to proliferate phenomena with a positive influence on engagement, whilst obliterating those with a negative influence, thereby improving human-machine interaction.

8.1.7 Key Concept 7: The beckoning of succour

Terrel Miedaner, in a short story called “The Soul of the Mark III Beast”, describes how a human in the story does not want to kill a robot beetle. The human is given a hammer and asked to smash the small moving beetle, but feels that it is not fair to the little beetle to do so. The human reluctantly hits the beetle once with the hammer; the beetle begins to wince and make a noise as if it were in pain, and the human is now even more reluctant to strike it again. The idea that a machine can beckon succour, earning some empathy from the human assassin in the story, is interesting because it represents another possible lever for increasing audience engagement. It is not only that some empathy may be earned; it is also that the threshold of this empathy giving is different for each human individual. Some individuals might find it easy to destroy the beetle, whereas others may have a real problem doing so.

This empathy felt by humans is another aspect of the anthropomorphic stance. Some would not buy the beckoning of succour and would have no qualms whatsoever wielding the hammer of destruction with vigour, possibly even revelling in the act, but many humans do appear predisposed to feel empathy. For one example: my wife, when proofreading the reduction presented in section 3 of this thesis, expressed feeling sorry for the robot because it was devoid of the senses that people hoped, wanted and expected it to have. One way or another, the concept that empathy might easily be evoked in the human raises consideration for it as a lever for increasing audience engagement, as is discussed in section 9.2.7 Engagement.