Avatars, Agency and Performance: The Fusion of Science and Technology within the Arts
Richard Andrew Salmon 2014
6. The Creative New Media Propositions
It was proposed by the Thinking Head performance team that some creative new media work to augment the Articulated Head's auditory visual environment should be developed and evaluated in parallel with the collection of Video Cued Recall Interview data during this investigation. A great deal of technical and programming work went into the creative new media additions to the Articulated Head exhibit; some of this work is cited in the text that follows, and I would not want the breadth of this work to go unnoticed. However, the technical execution of much of this creative new media work is in many ways peripheral to the core research detailed herein; therefore a separate document, which outlines more of the technical execution of these creative new media additions, is included electronically as E-Appendix 11: Creative New Media Additions Documentation. It should be noted that the implementation of these creative new media projects, and the findings from evaluation of their effectiveness, provided a convincing endorsement of the conclusions (see Sections 7 and 9) emerging from the core research activities. Indeed, the implementation of these projects, and more specifically their evaluation, helped to identify the key barriers to interactive engagement in the human-Articulated Head interaction under investigation.
6.1 The Additions
The creative new media propositions were conceived in general alignment with one of the objectives of the original grant proposal, E-Appendix 2: From Talking Heads to Thinking Heads, namely: to allow researchers to examine key questions in individual research areas which could not have been addressed without the implementation and development of the Thinking Head.
The key questions related to an individual research area that were of particular interest were:
1. Investigation of modes of auditory/visual media delivery in a three-dimensional spherical interactive environment, including:
- Investigation of tools and techniques for the presentation of contextualising visual media in interactive environments.
- Investigation of multichannel spatial audio tools and techniques for the delivery of contextualising three-dimensional sound representation to audiences in interactive environments.
With the above study aims in mind two creative new media augmentations to the Articulated Head’s interactive environment were proposed:
1. Projection of text – the project was given the title ‘The Thought Clouds’
2. A Spatial Auditory System
Section 3 presented a reduction comparative analysis of human-human communication compared to human-robot communication, which gave a deeper insight into the existing nature of the interactions taking place between the Articulated Head and its audience. Conditioning, agency, embodiment and engagement in relation to the senses became foci in the reduction. It was established that the Articulated Head was afforded no apparatus for capturing and analysing information in the sense-domains of smell, taste and touch. There was some possible, but limited, scope for provision of apparatus to affect audience engagement through stimulation of their sense-domains of smell, taste and touch in future human-machine interaction, more of which is discussed in Section 9 – Senses Related.
However, the reductions quickly established that the success, or otherwise, of the avatar's virtual performance was ostensibly entwined within the auditory and visual domains. As such, presentation of contextualising media in the auditory visual domain made sense as a primary target for enhancement of audience engagement. Much as was identified in the analogous performing arts scenario, The Stage, presented earlier in Section 5.3, the conditioning affect of the stage set, props and sounds surrounding the virtual performer (the Articulated Head) should have been able to act as a catalyst for the audience's experience of ‘other’ (an existential being) present within the interactive environment. The stage set and props (artificial or otherwise) should have been able to make the virtual performer's performance more believable.
The initial expectation was that implementation of the propositions would manifest in a deeper, more immersive and engaging interactive experience being reported by the audience interacting with the Articulated Head. It was identified early in the brainstorming of these project ideas that the real challenge in achieving the desired outcomes (illusions) from media delivery in the auditory/visual environment surrounding this avatar-audience interaction would not necessarily depend on achieving high accuracy, realism or speed of media delivery in these domains, but should be far more focused on achieving the desired effect – greater audience engagement.
Returning to the analogous performing arts scenario, The Stage, once more: the stage set and props do not have to be real in order to achieve catalysis of the projected narrative in portrayal of the performance. In contrast to programming languages and their associated technical, mathematical and scientific goals, which drove the operating system of the Articulated Head, the challenges presented here were artistic in nature, with agency, affect and engagement considered more important than accuracy and perfection in the details of the technical and creative design of these projects. That is, if a desired aesthetic goal could be reached by simple means, without specific attention to technological detail and accuracy, then those simple means should be utilised.
6.2 Developmental Stages and Logistical Impingements
The Articulated Head's auditory and visual augmentations were implemented in developmental stages, in parallel with the collection of Video Cued Recall Interviews, which were conducted across the three exhibition spaces in which the Articulated Head was exhibited during this study: the New Instruments for Musical Expression (NIME) exhibition (“NIME International Conference,” 2014) in June/July 2010; the Somatic Embodiment, Agency & Mediation in Digital Mediated Environments (SEAM) exhibition (“SEAM2010,” 2012) at the Seymour Centre (“The Seymour Centre,” 2010) in central Sydney in October 2010; and the Powerhouse Museum (PHM) (“Powerhouse Museum | Science + Design | Sydney Australia,” 2012) Australian Engineering Excellence Awards Exhibition (AEEAE) from February 2011 to November 2012.
This study was conducted between April 19th 2010 and April 18th 2013. The first of the three exhibitions at which the Articulated Head was exhibited during this study, the NIME exhibition, took place in June 2010, before any ideas for new media augmentations to the Articulated Head's interactive environment were solidified. Initial findings based on observation of the Articulated Head in operation and on Video Cued Recall Interviews collected at the NIME exhibition (prior to any detailed qualitative data analysis and node coding being conducted) contributed to the development of ideas for the augmentations. The SEAM exhibition was held in October 2010, and by this time the concepts for both the projections and the spatial auditory system had been discussed, but the development of both projects was still very much in its infancy. The equipment required to implement the projects was being purchased at the time. A prototype version of the initial projection idea, as detailed in Section 6.9 The Projections, was tested at the SEAM exhibition, but no Spatial Audio System had been developed at that point.
Therefore, most of the development and testing of these creative new media projects took place with the Articulated Head exhibited in the Powerhouse Museum.
6.3 Cost Implications and Technical Constraints
There were cost implications, aesthetic and artistic considerations, and a range of practical and technological limitations, which both constrained the Articulated Head's performance capabilities and restricted the number of developmental avenues likely to prove fruitful in terms of “conditioning affect and engagement” within the timeframe of this study, and more particularly within the Articulated Head's residency at the Powerhouse Museum (“Powerhouse Museum | Science + Design | Sydney Australia,” 2012), because this was where most of the developments to the auditory visual environment took place.
6.3.1 Outline of the cost implications
A full breakdown of the equipment and costs is presented in Appendix 3 (see Table 12.1). The cost implications of the proposed creative new media developments, and the concerns they may have raised, were partially dispelled by the plug-and-play nature of the creative additions, and by the adaptability and transferability of the resources required to facilitate the audio/visual work. Provided that all the hardware and software resources required for the auditory visual additions (other than some of the wiring) were retrievable from the Articulated Head installation at the Powerhouse Museum once the exhibit was dismantled, the equipment would subsequently be reusable for other applications within The MARCS Institute (MARCS, 2012); therefore the unrecoverable technology and equipment costs of the creative new media projects to the Institute would in fact be relatively small.
Some aesthetic and artistic considerations, and practical or technological constraints, proved to be persistent barriers to improvement of the Articulated Head's virtual performance that were, at least to some degree, irresolvable within the timeframe of this study. Examples include:
- The space available for a projection area around the exhibit
- The constraints on speaker mounting positions for the Spatial Audio System
- The support for testing and implementation, accuracy and usability of automatic speech recognition technology in exhibition environments
- The problem of deciding what the Articulated Head appears to pay attention to when surrounded by crowds of people
However, many of these aspects are important, and they feature clearly in Section 9, The Blueprint of Emergent Recommendations, which has been derived from the findings of this investigation.
6.4 Project links to agency, affect and engagement
The following table was created in the planning stage of these projects and is provided to make explicit how the creative project outcomes were intended to link directly to agency, affect and engagement.
Project 1 – Textual projections

The sprites, which display text, should change according to user and/or Chatbot text string input to the projection system, derived from the current conversational foci of the interactions taking place. The word associations relayed in the projections should demonstrate some semantic correlation with user input and thus promote suggestive evidence of thought and consciousness in the avatar, the result of which should be perceived agency in the eyes and minds of the audience interacting with it.

The affect expected from this projection initiative was that the audience interacting with the Articulated Head would experience a heightened sense that the Articulated Head showed evidence of thought and cognitive processing related to its responses to user input during interactions.

This projection initiative was expected to provide enhancement of plausible cognitive links between user input and Chatbot output, therefore increasing engagement of the audience by providing a more cohesive conversational experience.

Project 2 – Spatial Auditory Cues

Spatial audio should be used in two ways here: (1) the system should be used to make the voice of the avatar follow the position of the face on the screen on the end of the robotic arm; (2) THAMBS moods/modes and co-ordinates for centres of attention should be used to control and affect spatial auditory information, giving the Articulated Head's audio output agency in relation to its attention model.

The affect expected from these auditory initiatives was that when an audience interacted with the Articulated Head, they should have experienced enhanced attention to their presence, as the Articulated Head's voice and other spatial auditory cues would be spatiotemporally and contextually aligned to their immediate experience.

It was expected that an audience interacting with the Articulated Head would report enhanced engagement, because the Articulated Head's voice and other spatial auditory cues would be spatiotemporally and contextually aligned to their immediate experience.
Table 6-1 – Project links to Research Questions
To summarise the above table: the creative projects, designed to embrace the ways in which the avatar already had agency, were multisensory and could be multi-present. Furthermore, the projects sought to enhance the ways in which characteristics of the virtual performer condition affect and engagement, therefore directly advancing propositions for evaluation through the qualitative data analysis methods of investigation adopted during this study. The evaluation of these projects is detailed under Theme 12: Auditory Visual Interactive Environment.
With reference to placement of the robot's voice in the spatial auditory system, as cited in the table above, one might ask: why not mount a loudspeaker at the screen position on the end of the Articulated Head's industrial robotic arm? The answer is that a loudspeaker large enough to project the voice at high quality to the audience would have been heavy and would have put extra strain on the robotic arm. Furthermore, because the Articulated Head was surrounded by a high-walled glass enclosure, the internal acoustics of the enclosure would have rendered the results of such speaker placement muffled and unclear to an audience on the other side of the glass, due to reflected sound waves, standing waves and the associated deterioration of the sonic image presented.
6.5 Projected data-flow diagram
The flow diagram in Figure 6.1 below shows the projected plan for data flow between the Articulated Head and the new media augmentations proposed.
[Figure 6-1 shows the suggested flow of data communications for the Articulated Head event manager and the text clouds and spatial audio projects: user typed input strings from the Kiosk and Chatbot output strings pass through the Event Manager (TCP transmit/receive) into Max/MSP, where THAMBS variables, an Emotion Tagging module and a Processing WordNet module tag the data to manipulate the projections and the spatial audio soundscape; related text strings are passed to the Processing display code for projection output, while the Chatbot output string (with a delay), the text-to-speech engine and XYZ Cartesian variables feed the SPAT DBAP & ViMiC 16-channel spatial audio and voice output via the MOTU units.]
Figure 6-1 Data flow diagram for Creative Media Augmentations
Much of the technical work on the creative additions documented in this section was developed using Max/MSP/Jitter, now known simply as Max. Max is a computer-based, cross-platform, object-oriented programming environment which allows the user to link objects. Objects have inputs and outputs, and linking is achieved using virtual cables. Linking the objects is executed in what are known as patcher windows on the computer screen. A patcher window is the place where patch development starts within the Max environment. A patch may contain many objects and can also contain embedded patches, which open in their own patcher windows. When discussing Max objects in the following description of the technical development of this creative new media work (including any linked documentation), any Max object name present in the text is enclosed in [square] brackets. The development descriptions that follow do not include a detailed explanation of every object present within a patch, or every patcher embedded within any other patch; rather, they aim to present an overview of the functionality that Max and the other software technologies utilised delivered to the projects.
6.6 Overview of the Hardware Equipment Set-up
A simplified overview of the equipment set up for the creative new media augmentations is given in Figure 6-2 below.
Figure 6-2 Simplified Augmentations Equipment Set-up
Figure 6-2 above belies the complexity of the set-up for the auditory visual augmentations installed and implemented in the Powerhouse Museum. In reality there were a large number of soldered connections and much wiring to inlay and install alongside the set-up and construction of the rest of the exhibit.
Construction of the exhibit enclosure and installation of all the equipment, and of the Articulated Head itself, took a team of people approximately three weeks to complete. A short, low-quality iPhone slide show of images displaying some of the stages of the exhibit construction is included as an electronic document for reference as E-Appendix 12.
The following two images help to illustrate the complexity of just one part of the set-up: the wiring of the 19-inch rack-mounted equipment in the left-hand cupboard located inside the Video Cued Recall Interview Laboratory area, as indicated on the plan view diagram of the exhibit in Section 12.1.1 Appendix 1.
Figure 6-3 Installation of wiring: Left hand lab cupboard prior to hardware installation
Figure 6-4 Left-hand lab cupboard after hardware installation
6.7 Introduction to the Software Technologies Used
A detailed description of all of the programming of the software technologies utilised during the development of these creative new media works is not necessary to understand the outcomes, so where possible, flow diagrams and illustrations have been used to impart an overview expediently.
However, the following developer-sourced descriptions of the software technologies used, along with a brief description of each technology's contribution to this work, help to describe the Projection and Spatial Audio System outcomes in E-Appendix 11: Creative New Media Additions Documentation.
6.7.1 Max/MSP
Max/MSP is now known simply as Max, since the release of Max 5. “Max has been used by performers, artists, and composers to make cutting-edge work by connecting basic functional blocks together into unique applications. Max gives you the parts to make your own music, sound, video, and interactive media applications. Simply add objects to a visual canvas and connect them together to make noise, experiment, and play” (Cycling 74, 2012). Max has been used extensively in the creative new media projects, providing interfacing and text string management for the projections, and WiiMote and iPhone interface control using the [c74] object and the OSC protocol.
Max’s main contribution to this work
Max Patches have played a central role in both the Spatial Audio and the Projection projects that have been implemented and critically evaluated within the context of the Articulated Head’s interactive environment.
6.7.2 Jamoma
“Jamoma, a platform for Interactive Arts-based research and performance.
Jamoma is a community-based project, dedicated to the development of several toolsets and frameworks for artistic creation through digital means.
The project is led by an international team of academic researchers, artists and developers, and supported by several institutions” (“Jamoma.org,” 2012).
Jamoma’s main contribution to this work
Jamoma was utilised for the creation of a Vector Based Amplitude Panning Spatial Auditory System that was implemented as a prototype on the opening evening of the Articulated Head’s exhibit in the Powerhouse Museum. The Vector Based Amplitude Panning prototype was later upgraded to a Distance Based Amplitude Panning model using the SPAT objects (“Spatialisateur,” 2012).
6.7.3 Ircam SPAT objects for Max
Ircam SPAT (“Forumnet,” 2012) provides a number of objects for extension of the Max-programming environment, which extend facilities for implementation of a range of different spatial auditory configurations.
Ircam SPAT’s main contribution to this work
Ircam SPAT provided the main tools for creation of the Distance Based Amplitude Panning configuration for the Spatial Audio System implemented in the Powerhouse Museum.
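Distance Based Amplitude Panning derives each loudspeaker's gain from its distance to the virtual source, with no assumption about where listeners stand, which suits a museum exhibit where the audience moves freely. The following is a minimal Python sketch of the general DBAP technique, not the Ircam SPAT implementation used in the exhibit; the rolloff and blur parameters are typical illustrative defaults.

```python
import math

def dbap_gains(source, speakers, rolloff_db=6.0, blur=0.1):
    """Distance Based Amplitude Panning: one gain per speaker.

    source:     (x, y) position of the virtual sound source
    speakers:   list of (x, y) speaker positions
    rolloff_db: attenuation in dB per doubling of distance (6 dB typical)
    blur:       spatial blur term keeping gains finite when the source
                sits exactly on a speaker
    """
    a = rolloff_db / (20.0 * math.log10(2.0))  # rolloff coefficient
    d2 = [(source[0] - sx) ** 2 + (source[1] - sy) ** 2 + blur * blur
          for sx, sy in speakers]
    # unnormalised gains ~ 1 / d^(a/2), with blur folded into the distance
    raw = [1.0 / d ** (a / 4.0) for d in d2]
    # normalise for constant power: sum of squared gains == 1
    k = 1.0 / math.sqrt(sum(g * g for g in raw))
    return [k * g for g in raw]
```

Moving the source co-ordinates frame by frame (for instance, from the robot arm's screen position) and recomputing the gains is all that is needed to make a voice appear to travel through a fixed loudspeaker array.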
6.7.4 Vizzie
“A new set of modules called VIZZIE to help you create your own unique video programs right away. VIZZIE makes putting it together fun and gets you from start to finish in record time” (Cycling 74, 2010).
Vizzie’s main contribution to this work
Vizzie modules were utilised in this work for the purposes of creating/mixing various visual effects within the projections.
6.7.5 Osculator
Osculator provides a software link between devices such as a WiiMote, an iPhone and several other hardware controllers, and video and audio software on a computer. Osculator supports the Open Sound Control protocol, which makes it easy to interface and send messages between devices and software parameters for control purposes.
Osculator’s main contribution to this work
Osculator worked as the interface between the WiiMotes and Max.
6.7.6 Open Sound Control
Open Sound Control is a simple network control protocol. “This simple yet powerful protocol provides everything needed for real-time control of sound and other media processing while remaining flexible and easy to implement.
Open Sound Control (OSC) is a protocol for communication among computers, sound synthesizers, and other multimedia devices that is optimized for modern networking technology. Bringing the benefits of modern networking technology to the world of electronic musical instruments, OSC’s advantages include interoperability, accuracy, flexibility, and enhanced organization and documentation. There are dozens of implementations of OSC, including real-time sound and media processing environments, web interactivity tools, software synthesizers, a large variety of programming languages, and hardware devices for sensor measurement.
OSC has achieved wide use in fields including computer-based new interfaces for musical expression, wide-area and local-area networked distributed music systems, inter-process communication, and even within a single application” (Osc, 2012).
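The wire format underlying this flexibility is compact: an OSC message is a null-padded address pattern, a comma-prefixed type-tag string, then big-endian arguments aligned to 4-byte boundaries. The following stdlib-only Python sketch of the encoding is illustrative only; the project itself relied on Osculator, Max and the CNMAT objects for its OSC support.

```python
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-pad to the next 4-byte boundary. OSC strings always occupy
    a multiple of four bytes and include at least one terminating null."""
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode an OSC message with int, float and string arguments."""
    tags = ","                     # type-tag strings start with a comma
    payload = b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # 32-bit big-endian float
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # 32-bit big-endian int
        elif isinstance(a, str):
            tags += "s"
            payload += osc_pad(a.encode("ascii"))
        else:
            raise TypeError("unsupported OSC argument type")
    return osc_pad(address.encode("ascii")) + osc_pad(tags.encode("ascii")) + payload
```

A message built this way can be sent over UDP with an ordinary socket, which is how OSC is most commonly transported between applications on a local network.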
Open Sound Control’s main contribution to this work
OSC was the protocol used to communicate between Osculator and Max for the “WiiMote Patch”, and also between Max and Director over a UDP connection.
6.7.7 The CNMAT Library
The CNMAT Library (“Center for New Music & Audio Technologies (CNMAT),” 2013) is a group of Max objects developed by the Center for New Music and Audio Technologies, University of California, Berkeley, which expands the Max programming environment's capabilities with extended support for Open Sound Control, amongst other things.
The CNMAT Library's main contribution to this work
CNMAT objects have been utilised in various Max patches throughout this creative new media work. The OSC CNMAT objects were a notable contribution to this work.
6.7.8 The [c74] Object
The [c74] object is one of a number of solutions that allow the user to dynamically create user interface objects on an iOS device from within Max via a networked connection. This is very useful because these user interface objects can pass control information to and from Max, allowing remote control of Max patch parameters on one side of the network whilst also allowing dynamic control of user interfaces on the iOS device of choice. Furthermore, utilisation of the OSC protocol allows Max control of a vast array of other networked hardware and software parameters over Bluetooth, WiFi and Ethernet (TCP/IP).
The [c74] object's main contribution to this work
The [c74] object was used to interface an iPhone remote control surface with Max to control variable parameters related to the creative additions.
6.7.9 Processing
“Processing is an open source programming language and environment for people who want to create images, animations, and interactions. Initially developed to serve as a software sketchbook and to teach fundamentals of computer programming within a visual context, Processing also has evolved into a tool for generating finished professional work. Today, there are tens of thousands of students, artists, designers, researchers, and hobbyists who use Processing for learning, prototyping, and production” (Processing, 2012).
Processing’s main contribution to this work
Processing was utilised to create the main Thought Cloud projections.
6.7.10 Adobe Director
You can “explore exciting new dimensions in the award-winning multimedia authoring environment of Adobe® Director® software and Adobe Shockwave® Player. Create engaging, interactive games, applications, simulations and more for the desktop, kiosks, DVDs, CDs, and the web. Robust, flexible authoring and a streamlined workflow deliver a greater return on your creativity” (“On Location (software),” 2013).
Director’s main contribution to this work
Director was programmed to display the sentence and words prototype projections used at the SEAM Symposium and was also utilised for the dripping text projections shown at the Powerhouse Museum.
6.7.11 TeamViewer
“TeamViewer – the All-In-One Software for Remote Support and Online Meetings. Remote control any computer or Mac over the Internet within seconds or use TeamViewer for online meetings. Find out why more than 100 million users trust TeamViewer” (“TeamViewer,” 2012).
TeamViewer's main contribution to this work
TeamViewer was an invaluable tool during this project, allowing remote management and maintenance of the Spatial Audio System and projections.
Furthermore, it allowed complete remote control of the computers, so it was useful for remote connection to the Articulated Head's computers by the engineering team, and it also allowed a visual check of the exhibit's operation remotely, by accessing the data streams sent into the computers from the video cameras.
6.7.12 Alchemy Player
The Alchemy Player (“Alchemy Player,” 2013) VST plugin was utilised for creation of an ambient morphing soundscape, generated from within the Spatial Auditory System in Max. A range of other robotic sounding samples including motors, electrical sounds and other electro mechanical samples were used as sound bites delivered to the audience in various perceptual positions within the Distance Based Amplitude Panning spatial audio field using the aforementioned Ircam SPAT (“Spatialisateur,” 2012) objects in Max.
6.8 The Creative New Media Projects Development
E-Appendix 11: Creative New Media Additions Documentation displays images related to the technical Max patch implementations. Each image presented includes a brief description of the operation of the patch shown and its contribution to the operation of the projects. A short commentary relating to the creative additions follows under the next few headings.
6.8.1 The Projections
Existentialism – the ability of the Articulated Head to act at will, or at the very least to appear to act at will to a person interacting with it, was considered at the outset of these projects to be essential to that person's sense of other rather than object, robot or it. That is, perceived acts of will (whether false or otherwise) equal ‘Other’. The turn of phrase from ‘it’ to ‘him’, captured in a research participant's Video Cued Recall interview dialogue at the NIME conference (see the Key Evidence Links Table), was a very clear example of a person interacting with the Articulated Head who appears to have subconsciously attributed it with the status of other rather than object, robot or it.
The projections aimed to develop the avatar so as to give its audience an enhanced sense that it possessed consciousness and that it exhibited some evidence of thought and memory related to the current conversational foci of the interaction, so as to give the audience a sense of the presence of an ‘other’ – a conscious existential being as an active partner in the interaction. It was hoped that projection of words, either behind or in front of the Articulated Head, displayed on a mounted projection screen of some form, probably using frosted projection film, would help to achieve the above aim.
Text was to be gathered from both user input and Chatbot output with the intention that Chatbot output to the voice synthesis engine, which allowed the Articulated Head to speak words out loud, would sometimes be delayed momentarily whilst the previous Chatbot output and subsequent User input text strings were displayed on the projection screen. In practice the insertion of the specified delay proved to be unnecessary because some endogenous variable latency of the Articulated Head’s system operation was already present. It was expected that this simple projection idea would give the audience the impression that the Articulated Head had thought about what it was going to say before saying it.
A possible extension to the simple idea above was that text strings from current conversational foci could be used to derive similes, synonyms, antonyms and other related words for extraction and display from a database – this would allow for the projection of thought clouds, displaying text with relational connections to current conversational foci. It was expected that this would provide stimuli for the audience – food for thought, so to speak. It was intended that these stimuli would trigger identification of plausible cognitive links between user input and Chatbot output strings within the audience's mind. That is, the audience would imagine variations of possible links between what had been input and the Chatbot's output strings, based on perceived relational connections between the words displayed on the projection screen, therefore enhancing their perception of connections between conversational foci. The hope was that this trick would enhance engagement of the audience by providing a tangibly more cohesive conversational interactive experience.
In order to achieve the above ideas, consideration of the average audience vocabulary came into question. A paper in the Journal of Literacy Research (Zechmeister, Chronis, Cull, D’Anna, & Healy, 1995) indicates that the receptive vocabulary size of a college-educated native English speaker is about 17,000 word families, about 40% more than that of first-year college students, who know about 12,000 word families.
Given the relatively small vocabulary used by most people, the fact that only some of the words they use would provide satisfactory relational word output in projections, and the fact that conversation with the Articulated Head was dramatically constrained by interaction time and the context of engagement, it was thought that a fairly limited database thesaurus of relational words would be sufficient. When considering accessible online database options for production of relational word strings, on the advice of Professor Chris Davis, WordNet (Princeton, 2012) in particular became of interest. However, it was also considered possible that a faster and more effective database of similes, synonyms, antonyms and the like could be constructed manually from a recorded set of user interaction text strings. This possibility, although provisionally identified, was not in fact employed in the construction of the thought clouds in the end. Nevertheless, there are some interesting observations emergent from Section 7.1 Text String Input/Output Analysis that lend considerable weight to the idea of employing a variation of a simplified, manually constructed text string database. The suggestion for this variation is carried through into the blueprint of emergent recommendations from this investigation presented in Section 9.1.12 The Auditory Visual Interactive Environment.
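A manually constructed relational-word database of the kind described could be as simple as a keyword-to-associations lookup table. The sketch below illustrates the principle in Python; the function name and the thesaurus entries are invented placeholders, not the project's actual word lists.

```python
# A hand-built mini-thesaurus of the kind suggested above. These entries
# are illustrative placeholders only, not the project's actual data.
THESAURUS = {
    "music": ["sound", "melody", "rhythm", "song"],
    "robot": ["machine", "android", "automaton"],
    "think": ["ponder", "reason", "imagine", "recall"],
}

def thought_cloud(text: str, limit: int = 8) -> list[str]:
    """Collect related words for every known keyword in an input string,
    preserving order of first appearance, up to a display limit."""
    related = []
    for word in text.lower().split():
        for r in THESAURUS.get(word.strip(".,?!"), []):
            if r not in related:      # avoid duplicate projections
                related.append(r)
    return related[:limit]
```

Keeping the lookup to a small, hand-curated table trades coverage for speed and predictability, which matches the observation above that only a limited set of conversational words would yield satisfactory relational output.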
The projections could have been developed aesthetically in a number of different ways, but initially the project was constructed using Cycling74 Max and Adobe Director. It was recognised that networking information between Max and Processing could yield more adventurous and pleasing artistic and aesthetic possibilities, and so further developments were planned in the Processing (Processing, 2012) direction.
The Projections at SEAM
At the SEAM exhibition the first prototype of the text projection idea was developed, using Max as the software to interface text information between the Articulated Head’s Event Manager and the projections. Max passed the text strings to Adobe Director sprites for display. Open Sound Control (OSC) messages were sent from Max to Adobe Director to control the sprites’ positions in the projections. Figure 6-5 below shows the projections and the Articulated Head set-up at SEAM.
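The Open Sound Control messages passed between Max and Director follow OSC's wire format: an address pattern, a type-tag string, and arguments, each padded to four-byte boundaries. The sketch below models that encoding in plain Python; the address name is an invented example, not the actual namespace of the Max patch or the Director OSCXtra.

```python
import struct

def osc_string(s):
    """Pad an OSC string to a multiple of four bytes (NUL-terminated)."""
    b = s.encode("utf-8") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address, *args):
    """Build a minimal OSC message with string and float arguments."""
    typetags = "," + "".join("f" if isinstance(a, float) else "s" for a in args)
    payload = b""
    for a in args:
        payload += struct.pack(">f", a) if isinstance(a, float) else osc_string(a)
    return osc_string(address) + osc_string(typetags) + payload

# e.g. a text string destined for a Director sprite (address is hypothetical):
msg = osc_message("/sprite/text", "hello")
```

Such a packet could be sent over UDP to the receiving application, which is essentially what Max and the Director OSCXtra handled internally.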
Figure 6-5 The Articulated Head Set-up and Projections at SEAM
In Figure 6-5 above, three glass panels (approx. 1.25m wide by 2m high) are clearly visible. There were eight aluminium-framed glass panels in total, which together formed an octagonal protective surround for the Articulated Head. The enclosure shown here was the same enclosure used at the NIME exhibition. The only difference is that the three panels to the front of the enclosure carry frosted projection film on their lower halves. The film displays some multicoloured words (sentences), derived from user input or Chatbot output text strings. The three projectors are mounted on the ground inside the enclosure, below the extended reach of the Articulated Head’s arm so as to avoid damage.
Execution of the Prototype Text Projections
Simplicity was a necessity with the projections because initially there was only a little time to program them before the SEAM exhibition. The Articulated Head’s Event Manager required modification in order to make provision for passing text strings out to Max. The format for passing text strings out was agreed quickly, and prefix delineators of 1 to identify user input strings and 2 to identify Chatbot output strings were implemented.
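The prefix-delineator convention can be modelled as below. The exact wire format (whether whitespace followed the prefix digit, for instance) is an assumption for illustration; only the 1/2 convention itself is documented above.

```python
# The Event Manager prefixed each outgoing text string with a delineator:
# 1 for user input, 2 for Chatbot output. A receiver can route strings
# on that basis. The handling of whitespace here is an assumption.
def parse_text_string(raw):
    """Split a prefixed string into (source, text)."""
    sources = {"1": "user", "2": "chatbot"}
    prefix, text = raw[0], raw[1:].lstrip()
    return sources.get(prefix, "unknown"), text
```

A downstream patch could then, for example, colour user and Chatbot strings differently in the projections.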
Development of the Event Manager/Max interface was limited to passing text strings out from the Event Manager at this point in time; no provision had yet been made for passing text strings or any other data from Max back into the Event Manager.
The Articulated Head enclosure as seen in Figure 6-5 utilised one of its panels (to the rear in the picture shown) as a door. The door was normally locked. If the door was opened then a switch was tripped to automatically stop the Articulated Head’s robotic arm from moving for health and safety reasons.
Once the trip switch had been activated, a full restart of the system, which took several minutes, was required to get the Articulated Head up and running again. Naturally it was desirable to keep the door locked, as extended periods of downtime in an exhibition environment are embarrassing.
Diagram 6-6 below indicates the flow of text string data from the Event Manager through to Max via an Ethernet cable. The data was then passed from Max to Director via the Director OSCXtra (“Adobe (Software),” 2012). The Director stage was designed to be 2400 pixels wide by 600 pixels high so that it could be spread evenly across the entire display area of three projectors (800 by 600 pixels each). Figure 6-7 indicates the extended screen layout that the KVM switch module facilitated for the projections. The Director file was designed to display sixteen text sprites placed in various positions on the stage, so that the average sentence lengths gathered from the Event Manager filled the stage area appropriately. The text sprites would randomly display the last sixteen text strings derived from either user input or Chatbot output. Various display parameters of the text sprites on stage could be controlled, such as colour, size and rotation.
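The sprite behaviour just described, a rolling store of the last sixteen strings scattered across the 2400 by 600 pixel stage, can be sketched as a simplified model. The real layout logic (sizes, colours, rotation) lived in Director's Lingo and is not reproduced here.

```python
from collections import deque
import random

STAGE_W, STAGE_H = 2400, 600   # Director stage spanning three 800x600 displays
NUM_SPRITES = 16

class TextDisplay:
    """Keep the last sixteen text strings and assign each a stage position.

    A simplified model of the Director sprite behaviour, not the actual
    Lingo implementation.
    """
    def __init__(self):
        self.strings = deque(maxlen=NUM_SPRITES)  # oldest strings drop off

    def add(self, text):
        self.strings.append(text)

    def layout(self):
        # One random position per stored string, kept inside the stage.
        return [(s, random.randint(0, STAGE_W), random.randint(0, STAGE_H))
                for s in self.strings]
```

Each incoming string displaces the oldest, so the projection always reflects the most recent sixteen exchanges.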
Some experimentation with the size, colour and placement of text in the projections was required to give an aesthetically pleasing result, so some sort of remote control system for the prototype projections was needed. To this end, an OSC Max patch called the “WiiMote Patch”, designed by Kim Cascone in December 2007 and sourced from my principal supervisor Assoc. Prof. Garth Paine, was utilised for the purpose of passing control information from the WiiMote through OSCulator to the WiiMote Patch. The WiiMote Patch was given [send] objects to pass the WiiMote parameters on to the Projections Patch. The Director OSCXtra collected data from the Projections Patch to control sprite parameters. Figure 6-8 a & b and Table 6-3 below indicate the data flow and connection protocols employed.
Figure 6-8 Director projection control a) Projections Data Flow and Protocols b) [Send] objects (WiiMote Patch → Projections Patch)
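The essence of this control chain is a linear mapping from a controller value onto a sprite parameter range. The sketch below assumes, for illustration only, that OSCulator delivers values normalised to 0..1; the actual ranges used in the WiiMote and Projections Patches are not documented here.

```python
# Illustrative mapping of a WiiMote value (assumed normalised 0..1 by
# OSCulator) onto a Director sprite parameter range. Ranges are invented.
def scale(value, out_min, out_max):
    """Linearly map a 0..1 controller value onto a parameter range."""
    return out_min + max(0.0, min(1.0, value)) * (out_max - out_min)

# e.g. rotation in degrees and font size in points (hypothetical ranges)
rotation = scale(0.5, -45.0, 45.0)
size = scale(0.25, 12.0, 96.0)
```

Clamping the input keeps a jittery accelerometer reading from pushing a sprite parameter outside its usable range.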
… the text storage and release in the dripping text projections. Figure 6-10 below shows the same patch unlocked.
Figure 6-10 Director projection embedded patcher window

6.8.3 The Creative Additions at the Powerhouse Museum
Prior to installation in the Powerhouse Museum, the Articulated Head made only rudimentary use of spatio-temporal auditory visual contextualising cues for the enhancement of its interactive performance. The aforementioned spatial auditory systems, and the projections titled the Thought Clouds by Associate Professor Garth Paine, were implemented in the Powerhouse Museum.
The Spatial Auditory System
Initially, Genelec 6010A bi-amplified active loudspeakers (“Genelec 6010A is an Extremely Compact Two-Way Active Loudspeaker,” 2012) provided mono sonification of the Articulated Head’s phonetic audio sample library to the interacting audience. Expansion to a three-dimensional spatial auditory system was proposed for three explicit reasons:
1) For psychoacoustic placement of the robot’s voice in the interactive environment, with the intention that the voice would appear to emanate from the mouth of the head present on the screen, regardless of the screen position within the interactive environment. The rationale was that engagement would increase with a sense that the voice and the visualisation of the head were co-located, as is the nature of a real human being moving around in the interactive space.
2) For the presentation of other contextually relevant spatial auditory cues, which link visual movement to features of auditory output.
3) For the sonification of a complementary soundscape, the intention being to create a more intimate auditory environmental experience for patrons of the Powerhouse Museum in the surroundings of the exhibit, giving the exhibit its own individual and separated, yet still contextually linked, auditory space in relation to the wider museum environment surrounding it.
The plan for presenting contextually relevant spatial auditory cues was to map specific features of synthesised auditory material, such as the embodied conversational agent’s voice or parameters of a particular sound patch (for example the cut-off and resonance filters and/or a panning control), to robotic arm position co-ordinates and movement. Co-ordinates were to be provided to the spatial audio system by the Event Manager/Max interface, which was specifically programmed for the task of passing data out from the Event Manager to Max by Zhengzhi Zhang. Linking physical or visual movement to correlating complementary auditory output is known to be effective in solidifying events into one unified percept (Lewald, Ehrenstein, & Guski, 2001).
The plan for the soundscape was to contextually link the themes of robots, films and space. These themes were chosen as they were prevalent subjects of interest arising in audience discussion and information exchange in relation to, and in interaction with, the Articulated Head. Robots, films and space also featured strongly in many other exhibits throughout the Powerhouse Museum. Sounds such as those generated by the robotic droids R2D2 and C3PO in the film Star Wars (Lucasfilm, 2012), mixed with synthesised sounds similar to those used in science fiction films such as Star Trek (CBS-Entertainment, 2012), and mechanical sounds, such as those generated by clockwork winders, small Meccano (Meccano, 2012) motors and the like, were employed within the spatial audio composition, with the express intention of promoting a sense of intertextuality (Kristeva, 2002)
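The proposed arm-to-audio mapping amounts to scaling arm co-ordinates onto parameter ranges such as pan position and filter cut-off. The sketch below is a minimal model of that idea; the co-ordinate and frequency ranges are assumptions for illustration, not values from the actual system.

```python
# Sketch of the proposed mapping from robotic-arm co-ordinates to audio
# parameters. All ranges here are invented for illustration.
ARM_X_RANGE = (-1.0, 1.0)        # assumed normalised arm reach, left..right
CUTOFF_RANGE = (200.0, 8000.0)   # assumed filter cut-off range in Hz

def map_range(value, in_range, out_range):
    """Linearly map a value between ranges, clamped to the output range."""
    (i0, i1), (o0, o1) = in_range, out_range
    t = (value - i0) / (i1 - i0)
    return o0 + max(0.0, min(1.0, t)) * (o1 - o0)

def arm_to_audio(x):
    """Derive a pan position and filter cut-off from the arm x-coordinate."""
    pan = map_range(x, ARM_X_RANGE, (0.0, 1.0))       # 0 = left, 1 = right
    cutoff = map_range(x, ARM_X_RANGE, CUTOFF_RANGE)  # brighter to the right
    return pan, cutoff
```

Values computed this way could then be sent on to the spatial audio system as OSC messages, which is the role the Event Manager/Max interface was to play.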
between these themes within the minds of patrons visiting the exhibit.
Intertextuality means the intermingling or shaping of one text’s meaning with another text, and it applies equally to associations made between features of audiovisual content. Parenthetically, Star Trek is a favourite of Stelarc’s, and the Articulated Head often referred to Star Trek when asked “what do you like to watch on TV?” or “what is your favourite film?”
The Thought Cloud Textual Projections
The Articulated Head was promoted as a Thinking Head, and not all of its responses to user input were immediately logical or made proper sense within the context of the conversation taking place; therefore the employment of intertextuality within the visual projection domain was considered.
Intertextuality can be intentional or accidental: designed to convey a specific message to an audience, who perceive the message as intended, or a message perceived by an audience which may not necessarily be the express intention of the writer but is more a construct of self-interpretation.
Therefore, it was proposed that visualisation of contextualising cues in the form of similes, synonyms, antonyms and other words, related to either user input, or the responses of the robot’s embodied conversational agent, would further enhance the illusion that the Articulated Head was indeed thinking, that a thought process of deduction was actually taking place.
The proposition was that relationally linked words would make for a more cohesive interaction by extending the number of stimuli for audience perception. The plan was to present the words from a cloud of jumbled text characters, hence the name “Thought Clouds”. NextText (“NextText,” 2012) is a Java library for building dynamic and interactive text-based applications, and NextText for Processing (Processing, 2012) is a port of the library for the Processing development environment. An example file provided with the library, “WhatTheySpeakWhenTheySpeakToMe”, is an interactive artwork by Jason Lewis and Elie Zananiri, with the Processing port by Elie Zananiri (Obx Labs, June 2007). This code formed the starting point for the programming of the Thought Clouds. Dr Richard Leibbrandt from Flinders University, Adelaide, South Australia took on the task of programming the Thought Clouds.
Some alternative projections were required to work as a placeholder in the Powerhouse until the Thought Clouds were ready. The plan was to extend the SEAM projections using Adobe Director (“Adobe (Software),” 2012) Lingo, an interactive multimedia programming language and development environment. These projections required the same input and output system from the Event Manager/Max interface as the Thought Clouds, but were intended to display a simpler use of the I/O word data. The plan was to have dripping text characters bearing an unmistakable resemblance to the display of green text characters synonymous with the opening and closing sequences of the film The Matrix (Warner Bros, 2012). Word extracts from user input or embodied conversational agent output would appear from within the dripping text. The intention was to accentuate the robot theme, drawing on popular culture influences including films and space, and encouraging intertextuality within the placeholder projections.
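The dripping-text behaviour can be modelled as columns of random characters advancing downward, with interaction words occasionally revealed among them. This is a toy model only; the original effect was written in Director's Lingo, and none of the names below come from that implementation.

```python
import random

# Toy model of the "dripping text" placeholder effect: a column of random
# characters drips down the stage, and occasionally a word extracted from
# the interaction text appears at the top in place of random characters.
CHARS = "abcdefghijklmnopqrstuvwxyz0123456789"

class DripColumn:
    def __init__(self, height):
        self.height = height   # number of visible character cells
        self.cells = []        # characters from top of column downward

    def step(self, word=None):
        """Advance one frame: push a new character (or a whole word) at the top."""
        if word:
            self.cells = list(word) + self.cells
        else:
            self.cells.insert(0, random.choice(CHARS))
        del self.cells[self.height:]   # characters drip off the bottom
        return "".join(self.cells)
```

Running many such columns side by side, seeded occasionally with user or Chatbot words, approximates the intended Matrix-like placeholder display.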
The dripping text projections and a Jamoma vector base amplitude panning spatial audio system were up and running at the opening of the Articulated Head exhibit in the Powerhouse Museum at the beginning of February 2011. The SPAT (“Forumnet,” 2012) distance-based amplitude panning system and the Processing (Processing, 2012) Thought Clouds were developed and implemented over the first few months of the exhibit’s display.
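The core of vector base amplitude panning, as used by the Jamoma system, is the computation of gains for the pair of loudspeakers spanning the source direction. The sketch below shows the standard two-dimensional tangent-law case under constant-power normalisation; it is a minimal illustration of the principle, not the Jamoma implementation, which is far more general.

```python
import math

def vbap_pair_gains(source_deg, left_deg, right_deg):
    """Tangent-law gains for a source between a pair of loudspeakers.

    A minimal 2-D sketch of vector base amplitude panning; angles are in
    degrees, measured in the horizontal plane.
    """
    centre = (left_deg + right_deg) / 2.0
    half = (right_deg - left_deg) / 2.0
    phi = math.radians(source_deg - centre)   # source angle from pair centre
    phi0 = math.radians(half)                 # half-angle between speakers
    ratio = math.tan(phi) / math.tan(phi0)    # (gR - gL) / (gR + gL)
    g_left, g_right = 1.0 - ratio, 1.0 + ratio
    norm = math.hypot(g_left, g_right)        # constant-power normalisation
    return g_left / norm, g_right / norm
```

A source midway between the speakers receives equal gains; a source at one speaker's position receives all the energy from that speaker, which is what lets a phantom voice appear to track the moving screen.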
Evaluation of the impact and effectiveness of these creative new media additions to the Articulated Head’s interactive environment is detailed in Theme 12: Auditory Visual Interactive Environment. A projected design refinement targeted at similar auditory visual interactive environments and exhibits, based on the findings from the evaluation presented in Theme 12, is imparted in section 9.1.12 The Auditory Visual Interactive Environment. For more details regarding the technical execution of these projects, please see E-Appendix 11: Creative New Media Additions Documentation, which is supplied as an electronic appendix. Section 7, which follows, presents the research data and findings.