How does one create interactive artificial intelligence within an app with great design?
1) interactive
“Interactive,” to me, means the speech layer, which consists of both speech-in (recognition) and speech-out (synthesis). At this point in time, the quality of the speech layer depends heavily on the hardware platform.
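To make the speech-in/speech-out split concrete, here is a minimal sketch of a speech layer as an interface, with the hardware-dependent engines injected as callables. The names (`SpeechLayer`, `recognize`, `synthesize`, `round_trip`) are illustrative assumptions, not any platform's actual API; a real app would bind them to the device's speech engines.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SpeechLayer:
    # Speech-in: raw audio bytes -> recognized text (stubbed below).
    recognize: Callable[[bytes], str]
    # Speech-out: reply text -> synthesized audio bytes (stubbed below).
    synthesize: Callable[[str], bytes]

    def round_trip(self, audio: bytes, respond: Callable[[str], str]) -> bytes:
        """One interactive turn: hear, think, speak."""
        heard = self.recognize(audio)
        reply = respond(heard)
        return self.synthesize(reply)

# Usage with stubbed engines; a real app would plug in platform ASR/TTS.
layer = SpeechLayer(
    recognize=lambda audio: "hello",
    synthesize=lambda text: text.upper().encode(),
)
print(layer.round_trip(b"\x00\x01", lambda heard: f"you said {heard}"))
```

The point of the indirection is exactly the hardware dependence noted above: the app's conversational logic stays the same while the speech-in and speech-out engines vary by platform.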
2) artificial intelligence
There is no definitive “artificial intelligence” at this time, mainly because we have not yet cracked “natural intelligence.” To date, winning a Turing-test competition such as the Loebner Prize has depended heavily on trickery, smoke and mirrors. Human conversation is highly psychological in nature, and therefore often marked by theatricality and so-called mind games.
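The "smoke and mirrors" can be illustrated with a toy ELIZA-style responder: it understands nothing, yet can sound conversational by reflecting the user's own words back through pattern rules. The rules below are made up for illustration, not taken from any actual Loebner Prize entry.

```python
import re

# Each rule: a pattern to spot in the user's words, and a template
# that mirrors the captured fragment back as a question.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that really the reason?"),
]
DEFAULT = "Please, tell me more."  # the classic all-purpose dodge

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I am tired of chatbots"))
# → Why do you say you are tired of chatbots?
print(respond("The weather is nice"))
# → Please, tell me more.
```

No knowledge, no reasoning: only theatricality, which is precisely why such systems can seem convincing in short, psychologically loaded exchanges.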
3) app
“App,” in this case, means to me an animated avatar. Lip sync is a major consideration for avatar animation.
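At its core, lip sync maps phonemes (speech sounds) to visemes (mouth shapes) that the avatar's animator can keyframe. Here is a minimal sketch; the phoneme symbols and viseme names are simplified assumptions, not any particular engine's actual table.

```python
# Simplified phoneme -> viseme (mouth shape) table; illustrative only.
PHONEME_TO_VISEME = {
    "AA": "open", "AE": "open", "AH": "open",
    "B": "closed", "M": "closed", "P": "closed",
    "F": "teeth-on-lip", "V": "teeth-on-lip",
    "OW": "rounded", "UW": "rounded",
    "S": "narrow", "Z": "narrow", "T": "narrow",
}

def visemes_for(phonemes):
    """Collapse a phoneme sequence into viseme keyframes, merging
    consecutive duplicates so the mouth doesn't visibly re-pop."""
    frames = []
    for p in phonemes:
        viseme = PHONEME_TO_VISEME.get(p, "neutral")
        if not frames or frames[-1] != viseme:
            frames.append(viseme)
    return frames

# "mama" is roughly the phoneme sequence M AA M AA:
print(visemes_for(["M", "AA", "M", "AA"]))
# → ['closed', 'open', 'closed', 'open']
```

A production toolkit would add timing from the TTS engine and blending between shapes, but the phoneme-to-viseme mapping is the heart of the problem.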
4) design
So, “design,” and in particular “great design,” may mean any number of things. Most pertinent to me is the availability of tools to implement great design. Nuance is far and away the leader in voice technology; what, then, are the open-source alternatives to Nuance’s tools available to everyday designers? AI consists not only of data, indeed big data, but also of methods for processing that data so that it can be accessed via natural language; where, then, are the open-source engines for transforming data into knowledge and vice versa? Finally, what are the open-source toolkits for designing well-lip-synced animations across mobile platforms? Answer me these questions, and then, maybe, we can discuss “great design” for mobile AI….
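To gesture at what "accessing data via natural language" means at its most basic, here is a toy keyword lookup over a tiny "knowledge base." The data and the matching strategy are illustrative assumptions; the real open-source engines asked for above would add parsing, ontologies, and inference on top of something like this.

```python
# A tiny made-up knowledge base: keyword -> fact.
KNOWLEDGE = {
    "speech": "The speech layer consists of speech-in and speech-out.",
    "lipsync": "Lip sync maps phonemes to mouth shapes for an avatar.",
    "turing": "Turing-test wins have relied heavily on trickery.",
}

def answer(question: str) -> str:
    """Naive natural-language access: scan the question's words
    for any known keyword and return the associated fact."""
    words = question.lower().split()
    for key, fact in KNOWLEDGE.items():
        if any(key in word for word in words):
            return fact
    return "I don't know."

print(answer("What is lipsync?"))
# → Lip sync maps phonemes to mouth shapes for an avatar.
```

The gap between this keyword trick and genuine data-to-knowledge transformation is exactly the gap the questions above are pointing at.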
See also my recent answer to:
· Is there an open source personal assistant for the web, like Speak to It Chrome extension that can be customized?