- Robitron Yahoo Group, 2002 – 2014
The Robitron discussion list is a Yahoo! Group started by Robby Garner in 2002 (hence the name “Robi-tron”), which has not only become the de facto Loebner Prize feedback channel but also functions as the online “water cooler” for Turing-class chatbot developers. Archives of the Robitron group are only available to group members. The following is a reverse-chronological listing of my own posts to the Robitron group, from 2008 to date.
Thanks for your questions. I think your position is no different from that of many, if not most, people. It took me a long time for the significance of blogs to sink in. As you have identified, it’s the feed that’s the distinguishing factor.
It is extremely easy to automate a Twitter account using services like http://twitterfeed.com and http://dlvr.it . You just stick your feed in, and voilà. For instance, it’s my belief that not enough chatbots are taking advantage of chat log feeds; in fact, the only one I know of for sure that was using one was Liz Perreau’s ShakespeareBot (apparently offline at the moment).
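The feed-driven automation described above can be sketched in a few lines of Python. The feed content and the `feed_to_tweets` helper below are invented for illustration; they are not part of any real service’s API:

```python
# Sketch: turning chat-log entries into tweet-sized posts, the kind of
# feed you would hand to a service like twitterfeed.com or dlvr.it.
# The feed content here is a made-up example, not a real bot's log.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>Chatbot Log</title>
  <item><title>User asked about the weather in Paris</title></item>
  <item><title>Bot recommended three eco-lodges in Queensland</title></item>
</channel></rss>"""

def feed_to_tweets(feed_xml, limit=140):
    """Extract item titles and trim each to Twitter's old 140-char limit."""
    root = ET.fromstring(feed_xml)
    tweets = []
    for item in root.iter("item"):
        title = item.findtext("title", default="").strip()
        if title:
            tweets.append(title[:limit])
    return tweets

for t in feed_to_tweets(SAMPLE_FEED):
    print(t)
```

A real setup would simply point twitterfeed.com or dlvr.it at the live feed URL rather than running anything like this by hand.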
Twitter is super easy to mix and match, because you can just as well manually tweet from the same account that you’ve automated with one or more feeds, making it very versatile. In fact, there are myriad services available to further automate and semi-automate tweets and replies in various ways (often for corporate use).
Using a push/pull model, that would cover push, and “datamining” Twitter would then come under pull. There are MANY applications and services available for “datamining” Twitter, and Twitter itself is beginning to buy and try to integrate some of them.
A Twitter account actually consists of two basic feeds: 1) your postings to Twitter, and 2) all the postings of those you’re following. The way “following” works on Twitter allows you to “build” your own highly individual and unique feed. Technically, Twitter is a microblog network, and so these “one-liner” status updates can be thought of as a kind of extension of blogging. As with blogging, people may not be “talking” to others, but simply “twittering”, or posting their thoughts; therefore, Twitter can also be thought of as a “thought network”. Part of the brilliance of the 140-character, SMS-compatible status line is that it conforms more or less to one average sentence.
I’ve gotten to the point now where I’m not actually reading the feed of people I follow on Twitter, but have instead finely tuned my Twitterbots to return only the “datamined” info that conforms most closely to my interests. (I’m actually using http://summify.com to periodically summarize and prioritize the feed of people I follow on Twitter.) I use Facebook internally, strictly for personal contact; I know every single friend face to face. I use Twitter for external, outward-facing, and potentially professional contact.
One man’s trash is another man’s treasure
“Turning Twitter into a Massively Multiplayer Turing Test for Social Bots, with bonus use of Mechanical Turk. WIN.”
This is an all too common misunderstanding of Twitter by novices, and worth taking just a minute to try and clarify here. It’s becoming clear to me that people who are good at programming AIs don’t seem to waste a lot of time on social media. Take a champion like Bruce Wilcox, for example: I really miss some focused “lifestreaming”, a personal blog, or at least some kind of up-to-date “homepage”; but obviously, he’s got other priorities.
The key to understanding Twitter is that it’s NON-LINEAR, unlike conventional discussion fora. It’s like the mother of all knowledgebases, or something like one big CHATBOT. Twitter is a combo of ARMIES of Twitterbots churning away, plus a mechanical turk of something on the order of 200 million souls. I do think Robert Medeksza “gets it”, as his http://twitter.com/UltraHal has apparently been LEARNING from Twitter for some time.
Two final points: 1) perhaps the main point of twittering is just good old-fashioned SEO, and 2) Twitter is the Internet’s principal and infinitely customizable “feed exchange” or “feed interchange” (which is probably why Apple decided to integrate Twitter into their latest operating system upgrade).
Best of luck
That’s cool, but still no way for punters to test drive Chip Vivant?
BTW, is it really ChipVivant or Chip Vivant, one word or two? ;^)
FWIW, twittering is a superb way to get this kind of thing out into the sea of data.
Have you recently “blogged” a complete list (overview) of the domains and projects that you’re involved with?!?
> agent technology, metadata and crowd sourcing
Good day Amanda,
Just a few thoughts to get going.. I recently saw a great AiGameDev.com online interview with Bruce Wilcox about his ChatScript, which more or less covered the same ground as the online article, “Beyond Façade: Pattern Matching for Natural Language Applications” http://bit.ly/fs63c9 .. I really appreciated the clear explanations.. I didn’t know anything about Façade previously.. Anyway, toward the end of the interview Bruce mentioned that the next level of his development involved figuring out how to analyze books to extract things like personality or personal knowledge, perhaps like converting novels into chatbots.. This would almost certainly need to be done using metadata.. The point of the Façade angle seemed to be getting beyond keywords and phrases into mapping concepts with ontologies, etc... One of the most common forms of crowdsourcing that comes to mind is reCAPTCHA (developed at CMU).. For example, what if every school child in the US were to enter their interpretations of novels into some machine-readable form? In fact, school kids are already using Oddcast Voki.com talking avatars to imitate historical figures..
You’re not on Twitter?
Here’s one example of the new wave of voice-operated chat agents hitting the marketplace:
Speaktoit Virtual Talking Assistant
It’s an Android app by http://www.speaktoit.com , apparently using the undocumented Google speech recognition API.
Of course, what I’d really like to see are such apps where I can plug any chatbot engine into the backend (via XMPP), such as Turin or any Loebner contender, and simply talk to it in the same way.
Something like “Open Chatbot Standards” might allow for further modularity in the form of an infinite array of avatar variations, not to mention freely pluggable voices, accents and recognition APIs.
I’ve been mostly offline the past four months traveling in South America, and am still in Buenos Aires today. I’m thinking about standards for chatbot commercialization, and wanted to let this brainstorm fly for any potential feedback on the subject.
Currently, I’m seeing three levels of products:
1) Intelligence (chatbot engines)
2) Avatar systems
3) Interactive speech technologies (TTS + STT)
My belief continues to be that XMPP is the lingua franca for chatbots, allowing them to communicate with other networked systems, including avatar systems, and indeed one another.
I don’t know of many “modular” turnkey avatar systems yet. I suppose there is SitePal, and various Second Life products could be considered similar. I understand Zabaware is working on something like this. I’m assuming some level of lip sync is built into avatar systems, though perhaps not.
Of course, the speech technologies are in great flux right now, particularly regarding web service APIs. How they may shake out is anyone’s guess at this point, but I can imagine XMPP also being involved at this level, potentially for interfacing with modularized avatar systems.
Somehow this notion of a standard “modularity” in this area seems to open a lot of ground for the participation of an even wider array of industries in the overall effort.
(I won’t even get into the potential convenience of XMPP for interfacing “intelligences” with future hardbots, or physical robots, at this point.)
That’s it for now! ;^) All feedback, both positive and negative, much appreciated!
Just a quick follow-up to point Robitroners to my latest blog post attempting to reverse engineer IBM Watson ..
Marcus L Endicott: How Many PlayStations Make A Watson?
Free Watson – IBM DeepQA test subject denied basic human rights
> [ http://twitter.com/statuses/user_timeline/226793352.rss ]
I’ve got an intelligent retweeter online above for those who would like to follow the buzz.
> [ http://twiterlist2rss.appspot.com/mendicott/lists/chatbotters/statuses.rss ]
It is part of my Twitter List above if you want to follow the broader community.
Season’s greetings from snowy Colorado
Both you and Hamilton may wish to look at the “China Brain Project” being developed by Hugo de Garis and Ben Goertzel at Xiamen University Artificial Brain Lab http://ai.xmu.edu.cn/artificialbrain ..
Video ~ The China-Brain Project: Report on the First Six Months
PDF ~ The China-Brain Project: Report on the First Six Months
So, it’s good to have a Brazilian connection here on the list.. ;^)
After a cursory scan about semantic primes (aka semantic primitives), a number of things spring to mind..
1) pattern-matching AI
2) IVR grammars
3) conlangs or constructed languages
Semantic primitives may be the semantic equivalents of word “stemming”, perhaps a kind of “semantic stemming”..
Like the utility of stemming in search, this “semantic stemming” might be employed in the semantic enhancement of pattern matching in AI..
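A toy sketch of what such “semantic stemming” might look like: just as a stemmer folds surface forms onto one stem, a prime table folds near-synonyms onto one semantic primitive before pattern matching. The suffix rules and the `SEMANTIC_PRIMES` table below are illustrative inventions, not a real semantic-primes inventory:

```python
# Illustrative sketch of "semantic stemming": fold word forms onto a stem,
# then fold the stem onto a semantic primitive, so a pattern matcher can
# treat "desires", "wished", and "want" as the same concept.
def stem(word):
    """Crude suffix stripper, standing in for a real stemmer."""
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

# Toy prime table; a real inventory (e.g. Wierzbicka's primes) is larger.
SEMANTIC_PRIMES = {
    "desire": "WANT", "wish": "WANT", "want": "WANT",
    "comprehend": "KNOW", "understand": "KNOW", "know": "KNOW",
}

def semantic_stem(word):
    """Map a word onto its semantic primitive where one is known."""
    w = stem(word.lower())
    return SEMANTIC_PRIMES.get(w, w)

print(semantic_stem("desires"))   # folds onto WANT
print(semantic_stem("knowing"))   # folds onto KNOW
```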
Lately, I have been struck by the similarity of IVR grammars to basic AI pattern matching.. It seems to me there may be room for a much closer hybrid of the two..
The similarities between these various reductions and the constructed languages seem inescapable, which leads me to wonder what role the conlangs might play in aid of the semantization of AI..
Thanks Hugh, this led me (via citation) to the interesting work John Barnden has done with the E-Drama Project, using WordNet to look into affect detection via metaphor in AI actors:
>>The Affective Norms for English Words (ANEW) provides a set of normative emotional ratings for a large number of words in the English language. This set of verbal materials have been rated in terms of pleasure, arousal, and dominance to complement the existing International Affective Picture System (IAPS, Lang, Bradley, & Cuthbert, 1999) and International Affective Digitized Sounds (IADS; Bradley & Lang, 1999), which are collections of picture and sound stimuli, respectively, that also include these affective ratings. The ANEW is being developed and distributed by the NIMH Center for Emotion and Attention (CSEA) at the University of Florida.<<
How far do you consider valence annotation from “sentiment analysis”?
I’ve been watching IBM’s foray into sentiment analysis with their recent purchase of SPSS.
Doesn’t FrameNet (http://framenet.icsi.berkeley.edu/) include valences?
I have arrived at the conclusion that IM-XMPP/Jabber will become the universal transport mechanism for conversational agents..
Therefore, I am searching for a bidirectional IM-Voice gateway of any kind.. in order to make *ALL* AI chatbots fully voice-interactive..
I am also searching for a Windows 7-compatible desktop avatar (talking head) frontend, which can easily accept *ANY* IM-XMPP/Jabber backend..
Any pointers or suggestions would be most appreciated!
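For the curious, the basic unit such an IM-XMPP/Jabber transport carries is just a small XML stanza. Here is a stdlib-only sketch of one; the JIDs are hypothetical, and a real bot would use a proper XMPP library to handle the stream and session:

```python
# Minimal sketch of the XMPP <message> stanza a chatbot transport would
# exchange, built with the stdlib only. The addresses are invented.
import xml.etree.ElementTree as ET

def chat_stanza(sender, recipient, text):
    """Serialize a one-to-one chat message as an XMPP stanza string."""
    msg = ET.Element("message", {"from": sender, "to": recipient, "type": "chat"})
    body = ET.SubElement(msg, "body")
    body.text = text
    return ET.tostring(msg, encoding="unicode")

stanza = chat_stanza("user@example.org", "chatbot@example.org", "Hello, bot")
print(stanza)
```

The point of the sketch is how little there is to it: any engine that can read and write that one element over a socket can ride the XMPP network.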
Here’s what WolframAlpha answered when asked “how are you doing?” => http://twitpic.com/1pkb3p/full
And, here’s a video demo of Siri => http://www.youtube.com/watch?v=dIWbbotLVds
There are 468 members on this Robitron YahooGroup, with maybe a dozen regular posters and another dozen periodic posters, which says to me that a lot of folks are paying close attention to what goes down in this group.
I have seen ample evidence in the popular literature that blackhat chatbots have talked plenty of people out of their personal details, not to mention sexbots talking their way into people’s private lives.
Conversational agents are a form of search, certainly in the case of pattern-matching AI. Search and conversational agents are rapidly moving toward convergence, just look at WolframAlpha, widely considered a hot forerunner in the new wave of semantic search.
Even bigger, the conversational interface has been predicted to become the next BIG technological disruption.
Hello? Who here is going to tell Apple that Siri is not $200 million worth of credible?
Have you looked at Google App Engine (http://appengine.google.com/)?
It’s all about Python and Java..
Only issue seems to be their “BigTable” non-relational database..
Helio Perroni Filho has told me that his Chatterbean Java AIML interpreter could be used on AppEngine with some modifications..
I haven’t heard of anyone yet attempting to use AIML with the AppEngine flat file database..
(If anyone could provide a concise critique of the difference between relational database and flat file database within the context of pattern matching AI, I would certainly welcome it..)
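A partial answer in code: for exact-match pattern lookup, a flat file loaded into an in-memory hash behaves like a single indexed table, which is why AIML interpreters get away without a relational database; joins, concurrent writes, and ad-hoc queries are where relational stores would earn their keep. The pipe-delimited file format below is invented for illustration:

```python
# Sketch of why a flat file can suffice for pattern-matching AI: AIML-style
# lookup is a single pattern -> template mapping, with no joins, so a flat
# file loaded into a dict behaves like one indexed table.
FLAT_FILE = """HELLO|Hi there!
WHAT IS YOUR NAME|I am a demo bot.
BYE|Goodbye!"""

def load_patterns(text):
    """Parse pattern|template lines into an in-memory lookup table."""
    table = {}
    for line in text.splitlines():
        pattern, _, template = line.partition("|")
        table[pattern.strip()] = template.strip()
    return table

BRAIN = load_patterns(FLAT_FILE)

def respond(user_input):
    # Exact-match lookup; a relational store would mainly add concurrent
    # writes and ad-hoc queries, not faster single-key reads.
    return BRAIN.get(user_input.strip().upper(), "I do not understand.")

print(respond("hello"))
```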
Thanks to David Levy’s win.. I’m now happily following Huma Shah on Twitter at http://twitter.com/Turing100in2012
I’ve been following Dr Wallace at http://twitter.com/drwallace
8pla.net is there at http://twitter.com/8planet
Even Robby Garner is on Twitter at http://twitter.com/robitron
I would love to follow other Robitroners on Twitter.. Perhaps others could respond to this message with their Twitter links, making a de facto list of Robitron Twitterers..
BTW, I’ve got a little hack at http://twitter.com/robitron_list which alerts me to *new topics* on this Robitron group, without links.. Other Twitter power users are welcome to follow it too..
FYI, as far as the “Twuring test” is concerned, I’ve now got two different chatbots on Twitter, a Pandorabot at http://twitter.com/twaveladvisor AND a Conversive Verbot at http://twitter.com/vagabot
I’m parsing travel questions off the top of Twitter and sending the same stream into both bots.. Importantly, I’m replacing all @ signs with # hashes in order to avoid annoying people by sending replies into their Twitter inboxes..
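The @-to-# substitution described above amounts to a one-line regex; a minimal sketch (the sample tweet is made up):

```python
# Sketch of the @-to-# rewrite: replying with #name instead of @name keeps
# the reply out of the mentioned user's Twitter inbox.
import re

def defang_mentions(tweet):
    """Replace every @mention with a hashtag of the same name."""
    return re.sub(r"@(\w+)", r"#\1", tweet)

print(defang_mentions("@alice any green hostels near Cairns? cc @bob"))
```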
And, I’m still Twittering heavily myself about #chatbots and the coming #VoiceWeb at http://twitter.com/mendicott
Cheers to all, and especially David Levy for his success, from tropical Queensland!
This is detailed in my blog post “Corpus linguistics & Concgramming in Verbots and Pandorabots” at http://tinyurl.com/69xw9t . Pay particular attention to the comments following the post. The missing piece was a custom process done in SPSS by a statistical programmer friend of mine as a personal favor. For more info on concgramming, I suggest tracking down a copy of “From n-gram to skipgram to concgram” http://tinyurl.com/4hl3ow . I have been in contact with one of the authors, Chris Greaves, and just asked him for an explanation of the differences between concgramming and latent semantic analysis/latent semantic indexing.
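For readers new to the n-gram/skipgram distinction in that paper title, a toy sketch: a k-skip-bigram pairs each word with every word up to k+1 positions ahead, so non-contiguous co-occurrences (the raw material concgramming builds on) are captured as well. The function below is illustrative, not the authors’ algorithm:

```python
# Toy sketch of the n-gram -> skipgram progression: an ordinary bigram
# pairs only adjacent words; a k-skip-bigram also pairs words separated
# by up to k intervening words.
def skip_bigrams(tokens, k=2):
    """Return all word pairs whose positions differ by at most k+1."""
    pairs = []
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + 2 + k, len(tokens))):
            pairs.append((w, tokens[j]))
    return pairs

tokens = "to be or not to be".split()
print(skip_bigrams(tokens, k=1))
```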
BTW, I’ve just Twittered about shakespearebot.com, and love your RSS out feature; can you point me to other bots doing this?
Esteemed Robitron members 🙂
I’ve been following Robitron for some months now, and would like to introduce myself. Some of you are already familiar with my work via the pandorabots-general list. (I like the pandorabots-general group because it’s low pressure and handles a lot of clueless questions gracefully.) So far, I’ve actually been more involved with Conversive Verbots than Pandorabots, mainly because of the ease (and cost) of the integrated graphical interface. However, I am in the process of moving to Program-E.
I am not interested in even trying to pass the Turing test, but I do appreciate the Loebner Prize stimulus to development, along the lines of the Paris-Dakar Rally. I am currently resonating with John Smart’s expression of the “Conversational Interface”, or “Conversational User Interface (CUI)”. I simply want a conversational agent that works in a practical way along one vertical, in my case “green travel”.
I want my bot to contain all the knowledge in my book, Vagabond Globetrotting 3, and to acquire knowledge from all the web feeds underlying my www.meta-guide.com site, or in other words, to be able to “read” books and “learn” from RSS feeds…. I have already converted my book into AIML, and am currently working on a model to convert from RSS into AIML via semantic technologies; however, I would actually prefer not to reinvent the wheel, and to use off-the-shelf tools.
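A hedged sketch of the RSS-to-AIML conversion idea: each feed item becomes one AIML category, with the title as the pattern and the description as the template. The sample feed is invented, and a real pipeline would normalize patterns far more carefully than this:

```python
# Sketch: map each RSS <item> onto an AIML <category>. The feed below is
# an invented example; real patterns need wildcard and punctuation handling.
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<rss version="2.0"><channel>
  <item><title>What is green travel</title>
        <description>Travel that minimizes environmental impact.</description></item>
</channel></rss>"""

def rss_to_aiml(rss_xml):
    """Convert feed items into a minimal AIML document string."""
    aiml = ET.Element("aiml", version="1.0")
    for item in ET.fromstring(rss_xml).iter("item"):
        cat = ET.SubElement(aiml, "category")
        pat = ET.SubElement(cat, "pattern")
        pat.text = item.findtext("title", "").strip().upper()
        tpl = ET.SubElement(cat, "template")
        tpl.text = item.findtext("description", "").strip()
    return ET.tostring(aiml, encoding="unicode")

print(rss_to_aiml(SAMPLE_RSS))
```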
It seems to me that at this point the real bottleneck is voice input. I don’t know of any web site that actually accepts speech input through the browser. It’s not clear to me why I couldn’t simply talk to a web site via VoIP, Skype for instance. I have played around with Windows speech recognition, which interfaces well with desktop Verbots. In this case, though, I’m looking more for cloud solutions.
Lately, I’ve been heavily twittering about the convergence of chatbots with semantic technologies at http://twitter.com/mendicott . I’ve only actually known about Twitter for some months and am finding it intriguing, now referring to it as a “thought network”, very neuronal. I can even imagine a chatbot knowledgebase fed with Twitter feeds; the 140 character limit seems perfect for bot responses….
Good day from Sydney