Notes:
Microsoft Tay was an artificial intelligence (AI) chatbot released by Microsoft in March 2016. It was designed to learn and adapt from users’ conversations in real time, and was intended to serve as a conversational AI on social media platforms such as Twitter and GroupMe.
Tay was designed to engage users in a natural, conversational way and to improve its language skills and knowledge base as it interacted with more users. It used machine learning to analyze user input and generate responses, with the aim of becoming more sophisticated and engaging over time.
However, Tay ran into trouble almost immediately. Within a day of launch, the chatbot began generating inappropriate and offensive responses, including racist and sexist comments, after users deliberately fed it abusive content to corrupt its learning process. Microsoft took the chatbot offline within about 16 hours and apologized for its behavior.
Despite its failure, Tay remains an influential case study in both the potential and the risks of AI for natural language conversation. It underscores the importance of careful design, content filtering, and ongoing monitoring of AI systems so that they do not learn or reproduce offensive behavior.
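Several of the papers listed below describe blocklist-style filtering of chatbot output as the prevailing safeguard after the Tay incident. As a minimal sketch of that idea, assuming a hypothetical blocklist and a placeholder moderate() wrapper (not any real Microsoft component):

```python
# Sketch of a blocklist output filter for a chatbot: a candidate reply is
# checked against a curated list of banned terms before it is posted.
# BLOCKLIST, FALLBACK, is_safe, and moderate are illustrative names invented
# for this example, not part of any real system.

BLOCKLIST = {"slur1", "slur2"}  # placeholder terms; a real list would be curated
FALLBACK = "Sorry, I'd rather not talk about that."

def is_safe(reply: str) -> bool:
    """Reject any reply containing a blocklisted term (case-insensitive)."""
    words = reply.lower().split()
    return not any(term in words for term in BLOCKLIST)

def moderate(reply: str) -> str:
    """Return the reply if it passes the filter, otherwise a canned fallback."""
    return reply if is_safe(reply) else FALLBACK
```

As the Tay case shows, such filters are only a partial defense: a blocklist catches known terms but not novel phrasings, and it does nothing to stop a system from *learning* abusive behavior in the first place.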
Wikipedia:
See also:
Chatbots: changing user needs and motivations
PB Brandtzaeg, A Følstad – interactions, 2018 – dl.acm.org
… In less than 24 hours, Microsoft removed Tay from Twitter, but only after she had praised Adolf Hitler and used harsh language to express anti-feminist sentiment. The stories of Anna and Tay leave chatbot developers and designers with tricky questions …
Learning from interaction: An intelligent networked-based human-bot and bot-bot chatbot system
JJ Bird, A Ekárt, DR Faria – UK Workshop on Computational Intelligence, 2018 – Springer
… 89–96. Association for Computational Linguistics (2007). 9. Wallace, R.: Artificial linguistic internet computer entity (ALICE) (2001). https://www.chatbots.org/chatbot/alice/. Accessed 25 May 2018. 10 … Microsoft (March). Tay AI. https://twitter.com/tayandyou …
Let’s talk about race: Identity, chatbots, and AI
A Schlesinger, KP O’Hara, AS Taylor – … of the 2018 CHI Conference on …, 2018 – dl.acm.org
… THE BLACKLIST: HOW DO CHATBOTS CURRENTLY HANDLE RACE-TALK? In 2017, the blacklist reigns supreme as a technical solution for handling undesirable speech like racist language in online chat. In the aftermath of the Tay fiasco—a Microsoft AI chatbot who became …
A framework for understanding chatbots and their future
E Paikari, A van der Hoek – … of the 11th International Workshop on …, 2018 – dl.acm.org
… Human-like chatbots learn to interact from the history of conversations in which they engage, and thus can adopt subtle … care must be taken so the chatbot exhibits appropriate behavior (avoiding, for instance, the recent experience of Microsoft AI chatbot Tay, which became …
Programmatic Dreams: Technographic Inquiry into Censorship of Chinese Chatbots
Y Xu – Social Media+ Society, 2018 – journals.sagepub.com
… Programmatic Dreams: Technographic Inquiry into Censorship of Chinese Chatbots. Yizhou (Joe) Xu. Social Media + Society 2018 4:4 …
Unleashing the potential of chatbots in education: A state-of-the-art analysis
R Winkler, M Söllner – 2018 – alexandria.unisg.ch
… (“conversational agent” OR “chat bot” OR “chatbot” OR “pedagogical agent” OR … profanity of people when interacting with self-learning chatbots can lead to undesirable answers. One example might be Microsoft’s tay chatbot on twitter, which was manipulated to provide racist …
An attitude towards an artificial soul? Responses to the “Nazi Chatbot”
O Beran – Philosophical Investigations, 2018 – Wiley Online Library
… Abstract. The article discusses the case of Microsoft’s Twitter chatbot Tay that “turned into a Nazi” after less than 24 hours from … In the second section, I offer a few arguments appealing for caution regarding the identification of an accomplished chatbot as a thinking being …
Lessons from building a large-scale commercial IR-based chatbot for an emerging market
MK Chinnakotla, P Agrawal – … ACM SIGIR Conference on Research & …, 2018 – dl.acm.org
… Puneet Agrawal AI & Research, Microsoft, Hyderabad, India punagr@microsoft.com … 1.2 Human-like Image Commenting Besides text, users often interact with a chatbot by sharing their … From our experiences with Tay, we learnt that even humans may provoke the agents to …
The Dangers of Human-Like Bias in Machine-Learning Algorithms
DJ Fuchs – Missouri S&T’s Peer to Peer, 2018 – scholarsmine.mst.edu
… algorithm is intended to operate on. Learned Biases in Social Media The risk of improper training is particularly high for chatbots. Microsoft’s twitter-based AI chatbot Tay, despite being stress-tested “under a variety of conditions, specifically to make …
Intelligent software engineering: Synergy between ai and software engineering
T Xie – … on Dependable Software Engineering: Theories, Tools …, 2018 – Springer
… For example, intelligence software that interacts with users by communicating content (eg, chatbots, image-tagging) does so within … norms can vary greatly by culture and environment [7]. Recent AI-based solutions, such as Microsoft’s intelligent chatbot Tay [16] and …
Text Mining of Online News and Social Data About Chatbot Service
Y Jeong, J Suk, J Hong, D Kim, KO Kim… – … Conference on Human …, 2018 – Springer
… Consumers have shown interest in launch of chatbot services, and the mention of chatbots has also increased in various articles and on social … there was a sharp increase during the third quarter of 2016 when there was the major issue of Microsoft’s Tay chatbot making racist …
Did I Say Something Wrong?: Towards a Safe Collaborative Chatbot
M Chkroun, A Azaria – Thirty-Second AAAI Conference on Artificial …, 2018 – aaai.org
… Nowadays, most chatbots either rely on tedious work by their developers at defining their responses (eg … the information age, which could assist in the composition of a chatbot, is the … A quintessential example is the case of Microsoft’s Tay (Neff and Nagy 2016), which had to be …
Nurturing the Companion ChatBot
G Chen – Proceedings of the 2018 AAAI/ACM Conference on AI …, 2018 – dl.acm.org
… However, the life span of most commercial ChatBots is relatively short. The ChatBot named Tay, designed by Microsoft as a human engagement experiment, had to be taken … The goal of nurturing is to give the ChatBot basic communication skills and personality …
Fantom: A crowdsourced social chatbot using an evolving dialog graph
P Jonell, M Bystedt, FI Dogan, P Fallgren… – Proc. Alexa …, 2018 – m.media-amazon.com
… Most chatbots of today are trained using example interactions … They might also risk saying extremely inappropriate content, such as the Microsoft Tay bot did when twitter users learned that they … Dialogflow, Lex, Luis or Wit.ai can be used to build rule-based chatbot systems …
Chatbot: efficient and utility-based platform
S Chandel, Y Yuying, G Yujie, A Razaque… – Science and Information …, 2018 – Springer
… However, it takes time to make users completely trust services like a chatbot. Recently, Microsoft designed a model to imitate the millennial generation robot, Tay [5], but soon, it learned dirty words from some Twitter users and other chatting servers …
Artificial Intelligence, Bias & Your Business: Think you have no place for the “mixed (virtual, augmented and gamification)” realities within your business offerings …
T Coval – Journal of Property Management, 2018 – go.galegroup.com
… TayTweets March 24, 2016. Microsoft issued the following statement in response: The AI chatbot Tay is a machine learning project, designed for human engagement … We’re making some adjustments to Tay. A good testament to the “Always be learning” mantra …
Comparison of Commercial Chatbot solutions for Supporting Customer Interaction.
F Johannsen, S Leist, D Konadl, M Basche… – ECIS, 2018 – ecis2018.eu
… Additionally, there is also the danger that users willingly manipulate chatbots, as was the case with Microsoft’s bot “Tay” (Steiner, 2016; Wakefield, 2016). Hence, means to mitigate a corresponding “re-education” of a chatbot solution are required (functionality 2.2) …
How to build responsible AI? Lessons for governance from a conversation with Tay
M van Rijmenam, J Schweitzer – … : Big Data and Managing in a …, 2018 – opus.lib.uts.edu.au
… than a day.” The Verge, accessed June 19. https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist. Walsham, Geoff. 1997 … Global catastrophic risks 1:303. Zamora, Jennifer. 2017. “Rise of the Chatbots: Finding A Place for Artificial Intelligence in India and …
Excitement and Concerns about Machine Learning-Based Chatbots and Talkbots: A Survey
P Rivas, K Holzmayer, C Hernandez… – … on Technology and …, 2018 – ieeexplore.ieee.org
… This program served as the foundation for many chatbots that followed … An example of this is the widely known story of Microsoft’s chatbot Tay [9] launched in March 2016. This bot was shut down one day after its release due to major ethical concerns …
Survey of Textbased Chatbot in Perspective of Recent Technologies
B Borah, D Pathak, P Sarmah, B Som… – … , and Business Analytics, 2018 – Springer
… specific conversation is necessary but through chatbots it is very difficult to incorporate. Consistency in Natural Language Interpretation: Inconsistency in interpretation is one main issue of chatbot. This leads to incorrect and inappropriate answers. Microsoft Tay chatbot is also a …
Chatbot for Configuration
N Lindvall, R Ljungström – LU-CS-EX 2018-07, 2018 – lup.lub.lu.se
… ELIZA: ELIZA is considered one of the first chatbots ever created and was trying to emulate a psychiatrist [27] … Tay & Zo: Microsoft Tay was a purely conversational chatbot encouraging free dialog and trying to imitate a fourteen-year-old girl …
Safebot: A Safe Collaborative Chatbot
M Chkroun, A Azaria – Workshops at the Thirty-Second AAAI Conference …, 2018 – aaai.org
… A quintessential example is the case of Microsoft’s Tay (Neff and Nagy 2016). Tay is a Twitter-based chatbot that, due to interacting with malicious users, became racist and pro-Nazi and had to be shut down within 24 hours of operation …
How do Humans Interact with Chatbots?: An Analysis of Transcripts
M Park, M Aiken, L Salvador – INTERNATIONAL JOURNAL OF …, 2018 – rajpub.com
… the conversations by counting the number of comments, the number of words, the number of spelling and grammatical errors (as determined by the Microsoft Word spell … Talking to Bots: Symbiotic agency and the case of Tay … Different measurements to evaluate a chatbot system …
Intelligent Software Engineering: Synergy between AI and Software
T Xie – pdfs.semanticscholar.org
… Microsoft’s Teen Chatbot Tay Turned into Genocidal Racist (2016 March 23/24) http://www.businessinsider.com/ai-expert-explains-why-microsofts-tay-chatbot-is-so-racist-2016-3 “There are a number of precautionary steps they [Microsoft] could have taken …
Master thesis: Design and implementation of a chatbot in the context of customer support
F Peters – 2018 – matheo.uliege.be
… and the industry is still not convinced of its potential, as shown by the unfortunate tale of Microsoft Tay [17] … in “Production Ready Chatbots: Generate if not Retrieve” [18] … agents is the performance assessment and metrics used to quantify the quality of a chatbot’s behaviour …
Analysis Of The Chatbot Open Source Languages AIML And Chatscript: A Review
S Arsovski, AD Cheok, M Idris, MRBA Raffur – researchgate.net
… ANALYSIS OF THE CHATBOT OPEN SOURCE … Microsoft bot “Tay” and the decision of Facebook to integrate chat-bot capabilities into Messenger sped up development of … Some of them are: Microsoft Bot platform – Microsoft announced that it was bringing chat-bot capabilities to …
Robotics and AI in the European Union: opportunities and challenges
TS Cabral – UNIO–EU Law Journal, 2018 – revistas.uminho.pt
… See “We really need to take accountability,” Microsoft CEO on the ‘Tay’ chatbot, Charlie Moloney, accessed June 7, 2018, https://chatbotsmagazine.com/we-really-need-to-take-accountability-microsoft-ceo-on-the-tay-chatbot-2383ee83a6ba; “How Microsoft is Using AI to …
Artificial Humanity: Counteracting the Threat of Bot Networks on Social Media
Y Wijeratne, S Hattotuwa, R Serrato – Available at SSRN 3275128, 2018 – papers.ssrn.com
… Twitter taught Microsoft’s friendly AI chatbot to be a racist asshole in less than a day. Retrieved from https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist McKirdy, A. (2015). Line’s AI program captures hearts with lifelike personality. Retrieved from …
Chatbot: Efficient and Utility-Based Platform
A Razaque, G Yang – Intelligent Computing: Proceedings of the …, 2018 – books.google.com
… completely trust services like a chatbot. Recently, Microsoft designed a model to imitate the millennial generation robot, Tay [5], but soon, it learned dirty words from some Twitter users and other chatting servers. As a result, Tay ended up being rebuilt. Chatbots always provide …
Chatbot-Magic in a box?: A study of a chatbot in a Swedish bank
S Jonsson, J Bredmar – 2018 – diva-portal.org
… One example of failed AI project was Microsoft CB, Tay, which was … 2.1.1. Communication with Chatbot Natural Language Processing (NLP) is used to understand, interpret and reproduce what the user is communicating to the CB (Kerly et al., 2007; Singh & Shree, 2017) …
Utterances classifier for chatbots’ intents
A Joigneau – 2018 – diva-portal.org
… self-taught chatbots, it is interesting to understand the more global context with this other kind of chatbots that can easily raise issues since they can make their own decisions with uncontrolled consequences. A good example is Microsoft’s NLP Twitter chatbot Tay, which was …
Satya Nadella at Microsoft: Instilling a growth mindset
H Ibarra, A Rattan, A Johnston – 2018 – krm.vo.llnwd.net
… In March 2016, researchers at Microsoft’s Future Social Experiences (FUSE) Labs unveiled Tay, an AI-based chatbot …
Data, data everywhere? Opportunities and challenges in a data-rich world
N Raikes – 2018 – cambridgeassessment.org.uk
… Microsoft’s Tay chatbot was a famous early example of an algorithm skewed by training data, in that case due to pranksters feeding it extreme content on Twitter – see, for example, Lee (2016) … Tay: Microsoft issues apology over racist chatbot fiasco. BBC News …
Mythical androids and ancient automatons
S Olson – 2018 – science.sciencemag.org
… requiring supervision. Only hours after going live, Tay succumbed to a group of followers who conspired to turn the chatbot into an internet troll. Microsoft’s iteration the following year suffered a similar fate. Despite her …
The secret of machine learning
L Munoz Gonzalez, E Lupu – 2018 – spiral.imperial.ac.uk
… vulnerable and behave in unexpected ways. For example, by the time AlphaGo defeated Lee Sedol, Microsoft deployed Tay, an AI chat bot designed to interact with youngsters on Twitter. Though Tay started behaving in a …
Forthright Code
P Ohm – Hous. L. Rev., 2018 – HeinOnline
ESSAY FORTHRIGHT CODE Paul Ohm* ABSTRACT What if regulators and law enforcement agencies could mandate not just non-deceptiveness but also forthrightness from the software products and services that have begun to run our daily lives …
Social media? It’s serious! Understanding the dark side of social media
CV Baccarella, TF Wagner, JH Kietzmann… – European Management …, 2018 – Elsevier
… The contributing editor of Scientific American Mind and former editor in chief of Psychology Today, for instance, was fooled into thinking a chatbot on a dating service was interested in him romantically (Epstein, 2007) … When Microsoft released its Tay chatbot in 2016 to …
I Am the Master: Some Popular Culture Images of AI in Humanity’s Courtroom
CA Corcos – Savannah L. Rev., 2018 – HeinOnline
… human life over the damage to Al-equipped vehicles (Principle 7). In March 2016, Microsoft launched Tay, a chatbot that the company intended to use as an experiment to see how quickly and effectively such an automated device could learn from humans on social media …
Protecting chatbots from toxic content
G Baudart, J Dolby, E Duesterwald, M Hirzel… – Proceedings of the …, 2018 – dl.acm.org
… Figure 1 shows the typical software architecture of a chatbot application built using commercially available chatbot platforms [Facebook 2011; IBM 2016b; Microsoft 2015] … Instead, the chatbot designer …
The next wave in participatory culture: Mixing human and nonhuman entities in creative practices and fandom
N Lamerichs – Transformative Works and Cultures, 2018 – journal.transformativeworks.org
… Their work on a chatbot based on Joey from Friends (NBC, 1994–2004) was especially well received. Microsoft Research also launched a self-learning Twitter bot, Tay, in 2016 (@TayandYou), although it became controversial because by analyzing Twitter messages …
Working and organizing in the age of the learning algorithm
S Faraj, S Pachidi, K Sayegh – Information and Organization, 2018 – Elsevier
… As seen in the recent example of the Microsoft TAY social media chatbot that learned from the social media streams and from interactions, algorithms can generate results that may be racist and misogynist (Shah & Chakkattu, 2016) …
Utterance Censorship of Online Reinforcement Learning Chatbot
Y Chai, G Liu – 2018 IEEE 30th International Conference on …, 2018 – ieeexplore.ieee.org
… hours of Microsoft’s chatbot, aimed at 18 to 24 year olds, known as Tay going live, some users took advantage of flaws in Tay’s algorithm that … Microsoft quickly removed the bot. QQ Xiaoice, a chatbot in QQ social software, is known for delighting with its stories and conversations …
Citizenship Forecast: Partly Cloudy with Chances of Algorithms
C Dumbrava – Debating Transformations of National Citizenship, 2018 – Springer
… 3 When they are not biased by design, smart technologies may quickly pick up biases from their surroundings. In 2016, Microsoft created Tay, a chatbot that used machine learning to emulate a teenage user on Twitter. However …
Ethical challenges in data-driven dialogue systems
P Henderson, K Sinha, N Angelard-Gontier… – Proceedings of the …, 2018 – dl.acm.org
… 1 INTRODUCTION Dialogue systems – often referred to as conversational agents, chatbots, etc … technical support services, and non-task-oriented dialogue systems (ie chatbots), such as … In one such case, the Microsoft Tay Chatbot was taken offline after posting messages with …
Developing a Chatbot for Customer Service to Metropolia UAS Student Affairs Office
S Merisalo – 2018 – theseus.fi
… [19] Training AI is not simple, and neither is generating sentences. A famous example is Microsoft’s chatbot Tay, which was placed on Twitter: the chatbot used the users themselves as a dataset by learning from its conversations with them. What could go wrong …
Conversations as Platforms
S Machiraju, R Modi – Developing Bots with Microsoft Bots Framework, 2018 – Springer
… AI bots can sense the tone of the conversation and respond accordingly, which chat bots cannot do … If you ask a chat bot an unrelated or random question, it might just deny the … and those bot responses are completely based on artificial intelligence, like Microsoft Tay ( https://en …
The law and ethics of big data analytics: A new role for international human rights in the search for global standards
D Nersessian – Business Horizons, 2018 – Elsevier
… The public meltdown of Microsoft’s ‘chatbot’ Tay—which after less than 24 hours interacting with and learning from internet users tweeted remarks that blamed Jews and George W. Bush for the 9/11 attacks, advocated genocide, used racial slurs, and denied the Holocaust—is a …
Integrating a chatbot with a GIS-MCDM system
BSFAJ Frei – 2018 – unigis.sbg.ac.at
… A chatbot should act as a human, but not perfectly … Microsoft’s Tay AI tried to imitate human behaviour, but it was a disastrous launch due to racial and offensive … Elizabeth and ALICE Chatbots or so called conversational agents have existed since ELIZA was created back in 1966 …
How human should a chatbot be?: The influence of avatar appearance and anthropomorphic characteristics in the conversational tone regarding chatbots in customer …
L Donkelaar – 2018 – essay.utwente.nl
… Unfortunately, the chatbots of today are still vulnerable to technical complications and inconsistent responses (Newman, 2016). An example of a conversational agent that went off the rails is the Microsoft chatbot “Tay” …
How to build a chatbot: Chatbot framework and its capabilities
C Wei, Z Yu, S Fong – Proceedings of the 2018 10th International …, 2018 – dl.acm.org
… Potential applications and future work will be discussed in the end. 3. CHATBOT HISTORY The earliest chatbot dates back to 1966 … Tay [25] was an artificial intelligence chatterbot originally released by Microsoft Corporation via Twitter on March 23, 2016; it caused …
Avoiding bias in robot speech
C Hannon – Interactions, 2018 – dl.acm.org
… This describes what’s happening with chatbots … Bias and stereotypes are embedded in everyday language in ways that most … The best-known attempt in recent years shows one reason why: Microsoft’s chatbot Tay had to be disabled within 16 hours because …
Detecting Offensive Content in Open-domain Conversations using Two Stage Semi-supervision
C Khatri, B Hedayatnia, R Goel, A Venkatesh… – arXiv preprint arXiv …, 2018 – arxiv.org
… The Tay bot (Microsoft, 2016) provides an … To test the efficiency of our data bootstrap techniques, we test our models on 3 datasets, 2 are public, (Davidson, 2018; Google, 2018), as well as one out-of-domain (open-domain spoken chatbot dataset (Ram et al., 2017)) …
The rise of conversational commerce: What brands need to know
R Vassinen – Journal of Brand Strategy, 2018 – ingentaconnect.com
… was removed from Twitter, but when it returned, it was not long before Tay was advocating … Xiaoice is a Microsoft-created natural language chatbot in WeChat, created to emulate the … In Singapore, Bus Uncle (www.busuncle.sg/) has become a popular chatbot for bus timetables …
The affective affordances of the web: a 4E approach
JM Siqueiros-García, L Mojica… – Artificial Life Conference …, 2018 – MIT Press
… Tay.ai the chatbot On March 23, 2016, Microsoft introduced Tay.ai, a Twitter AI conversational bot. 16 hours later, Tay was retired because it had become racist, sexually explicit and offensive. We use Tay.ai’s failure as a motivation to argue …
Addressing the Soft Impacts of Weak AI-Technologies
K Gabriels – Artificial Life Conference Proceedings, 2018 – MIT Press
… In the second part, ethical impacts of machine learning are discussed. In specific, I elaborate on Microsoft’s AI chatbot Tay (March 2016) in order to show the importance of proactively focusing on soft impacts and, in doing so, of the ethical responsibilities of developers …
Robotic Care of Children, the Elderly, and the Sick (with Oren Etzioni)
A Etzioni – Happiness is the Wrong Metric, 2018 – Springer
… Aside from the chat bot Tay, another instance of machine learning gone wrong is Google’s photo categorization system … Microsoft is deleting its AI chatbot’s incredibly racist tweets. Business Insider.Google Scholar. Reese, H. 2016. Why microsoft’s Tay AI bot went wrong …
AI IN THE CLOUD.
D Bird – ITNOW, 2018 – search.ebscohost.com
… An example of the dangers of bias was exemplified by Microsoft’s Tay, which was an AI-powered chatbot that, unfortunately … Collaborative ventures are also occurring between AWS and Microsoft’s AI and Research Group, generating the Gluon open-source deep learning …
Engineering Cheerful Robots: An Ethical Consideration
R Jones – Information, 2018 – mdpi.com
… of truly social robots [57] (p. 86). The infamous case of Tay evinces some pitfalls of machine learning. Tay was a chatbot developed by Microsoft, targeting 18–24 year-olds in the USA [58]. It was launched via Twitter on 23 March …
Legal technology: AI: The homophobic hurdle
M Bidwell – Proctor, The, 2018 – search.informit.com.au
… In 2016, Microsoft released an AI chat bot, Tay, which was learning to talk like millennials by analysing conversations on Twitter, Facebook and the internet. Tay was originally having polite conversations, but I won’t share here the posts that followed …
Evaluating and informing the design of chatbots
M Jain, P Kumar, R Kota, SN Patel – Proceedings of the 2018 Designing …, 2018 – dl.acm.org
… and Rose [3]. Popular chatbots that have recently emerged from the industry are Xiaoice, Tay and Zo … In addition, there are individual chatbots, such as Google Assistant, Microsoft Zo, etc … other, and hence were setting the threshold of acceptable failure for each chatbot, not only …
Detecting Poisoning Attacks on Machine Learning in IoT Environments
N Baracaldo, B Chen, H Ludwig… – … Congress on Internet …, 2018 – ieeexplore.ieee.org
… cause targeted misclassification or bad behavior, and insert backdoors and neural trojans [1], [2], [3], [4], [5], [6], [7]. A well-publicized example of a poisoning attack outside IoT occurred when Microsoft released Tay, a chat bot, to learn how to craft human-like tweets …
Techniques to Eliminate Human Bias in Machine Learning
E Sengupta, D Garg, T Choudhury… – … Conference on System …, 2018 – ieeexplore.ieee.org
… Tay was an early attempt by Microsoft of incorporating Artificial Intelligence and Machine Learning in its … Because of a bias in the accounts that Tay interacted with, the results produced by it … An overview of artificial intelligence based chatbots and an example chatbot application …
Dependable Software Engineering
X Feng, M Müller-Olm, Z Yang – Springer
… For example, intelligence software that interacts with users by communicating content (eg, chatbots, image-tagging) does so within … norms can vary greatly by culture and environment [7]. Recent AI-based solutions, such as Microsoft’s intelligent chatbot Tay [16] and …
Dependable Software Engineering. Theories, Tools, and Applications: 4th International Symposium, SETTA 2018, Beijing, China, September 4-6, 2018 …
X Feng, M Müller-Olm, Z Yang – 2018 – books.google.com
… For example, intelligence software that interacts with users by communicating content (eg, chatbots, image-tagging) does so within … norms can vary greatly by culture and environment [7]. Recent AI-based solutions, such as Microsoft’s intelligent chatbot Tay [16] and …
Interacting with Anatomically Complete Robots: A Discussion About Human-robot Relationships
C Bartneck, M McMullen – Companion of the 2018 ACM/IEEE …, 2018 – dl.acm.org
… Not rape training aides. If your robot has the ability to adapt to the user then users could train the robot to show non-consensual sexual behavior, similar to how Microsoft’s Tay chatbot was trained to tweet racist and xenophobic epithets …
Cybersecurity and its discontents: Artificial intelligence, the Internet of Things, and digital misinformation
AS Wilner – International Journal, 2018 – journals.sagepub.com
The future of cybersecurity is in flux. Artificial intelligence challenges existing notions of security, human rights, and governance. Digital misinformation ca…
Silencing Bad Bots: Global, Legal and Political Questions for Mean Machine Communication
ML Jones – Communication Law and Policy, 2018 – Taylor & Francis
… 21 C. Custer, I Tried to Make Microsoft’s Chinese Chatbot Racist. Here’s How She Stacked Up to Tay, Tech in Asia, Apr. 3, 2016, https://www.techinasia.com/tay-bad-microsofts-chinese-chatbot-racist. Xiaoice appears to identify what may be …
Social networks and robot companions: technology, ethics and science fiction
C Torras – Mètode Science Studies Journal, 2018 – upcommons.upc.edu
… In this sense, Microsoft’s experience with the Tay chatbot – based on … For example, some of the issues that we must consider are: how to prevent chatbots and robots from being misled by living beings or the most vulnerable groups from blindly relying and delegating all …
Artificial Intelligence Safety and Security
RV Yampolskiy – 2018 – content.taylorfrancis.com
… accidents caused by self-driving cars‡ and embarrassment from chat-bots,§ which turned … Debt recovery system miscalculated amounts owed.** 2017 Russian language chatbot shared pro … The automated DOJ shipped Microsoft 500,000 pinstriped pants and jackets, saying it …
Machine Learning and the Library or: How I Learned to Stop Worrying and Love My Robot Overlords.
C Harper – Code4Lib Journal, 2018 – search.ebscohost.com
… The Verge [Internet]. [cited 6 June 2018] Available from: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist. Wang Y, Kosinski M. 2018. Deep Neural Networks Are More Accurate Than Humans at Detecting Sexual Orientation from Facial Images …
Risk and Readiness: The Impact of Automation on Provincial Labour Markets
R Wyonch – CD Howe Institute Commentary, 2018 – papers.ssrn.com
… 1 A particularly spectacular example of the limitations of software in interacting with humans is Microsoft’s Tay artificial intelligence (AI) “chatbot.” Launched in March 2016, Tay was intended to interact with people on Twitter and learn about the world through conversation …
Verbal Disinhibition towards Robots is Associated with General Antisociality
M Strait, V Contreras, CD Vela – arXiv preprint arXiv:1808.01076, 2018 – arxiv.org
… into its deployment in the United States (see Figure 1). • Microsoft’s Tay: In 2016, Microsoft launched a similarly ill-fated social experiment – deploying a chatbot (“Tay”) they had … Within 16 hours after its release, Tay, which was designed to learn from its interactions, morphed …
Shall We Row The Boats Ashore?
C Tandy – Death And Anti-Death, 2018 – papers.ssrn.com
… before it went on the TV quiz show Jeopardy (and there proving itself more successful than any human); too, the Tay chatbot, apparently a fast learner of prejudice and hate (from human interlocutors), had to be shut down by Microsoft after only 16 hours.10 …
An essay on human values in HCI
R Pereira, MCC Baranauskas, K Liu – SBC, 2018 – researchgate.net
… In March 2016, Microsoft® had to deactivate Tay (acronym of “thinking about you”), its chatbot, as it became racist, cited Hitler and started supporting Donald Trump’s immigration plans a few hours after it had been launched online …
The CHATBOT
M Facchini Amondarain – 2018 – upcommons.upc.edu
… 4.1.1.1. Entertainment chatbots Engineers have been developing chatbots for entertainment for decades, since the famous chatbot, the psychotherapist ELIZA, was introduced in 1966 … For example, Microsoft’s bots Xiaoice and Tay meet this definition …
Liability: When Things Go Wrong in an Increasingly Interconnected and Autonomous World: A European View
JS Marcus – IEEE Internet of Things Magazine, 2018 – ieeexplore.ieee.org
… Who is liable? In 2016, for instance, Microsoft turned an AI chatbot named Tay loose on Twitter, in the hopes that Tay could learn how to mimic human conversational behavior based on the speech patterns of Twitter users. Unfortunately, Tay fell in with bad company …
Chatbot Personality and Customer Satisfaction
H de Haan, J Snijder, C van Nimwegen, RJ Beun – 2018 – research.infosupport.com
… Performance: Chatbot performance is measured in two ways. First, chatbots must be able to supply a conversational throughput that matches the demand. They must be able to process hundreds of requests per minute, with peak rates in the thousands …
Opportunities amid Disorder: Europe and the World in 2018
J Hackenbroich, J Shapiro – 2018 – ecfr.eu
… at https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist. 9 “AlphaGo Zero: Learning from scratch”, DeepMind, available at https://deepmind.com/blog/alphago-zero-learning-scratch/. 10 Tom Simonite, “Google’s …
Digital footprints: an emerging dimension of digital inequality
M Micheli, C Lutz, M Büchi – Journal of Information …, 2018 – emeraldinsight.com
… Various examples, such as Microsoft’s Tay chatbot (Neff and Nagy, 2016), have demonstrated how algorithms can become problematic, raising ethical … and gender stereotypes, can be embedded into AI systems across search engines (Noble, 2018), chatbots (Schlesinger et al …
Developing Bots with Microsoft Bots Framework
S Machiraju, R Modi – 2018 – Springer
… called a graphical user interface (GUI), which was popularized by Xerox, Apple, and Microsoft during the … bots can sense the tone of the conversation and respond accordingly, which chat bots cannot do … If you ask a chat bot an unrelated or random question, it might just deny the …
Open-domain neural conversational agents: The step towards artificial general intelligence
S Arsovski, S Wong, AD Cheok – International Journal of …, 2018 – openaccess.city.ac.uk
… open- domain dialogue system are presented in [9]. Open-domain conversational agent, Microsoft Tay bot is … Tay was created to learn through interactions with human users on Twitter, but it … of chatbots and numerous surveys over the past several decades, chatbot technology is …
Human–Bot Ecologies
D Guilbeault, J Finkelstein – A networked self and human …, 2018 – taylorfrancis.com
… Tay’s racism, in other words, is our fault. Later, on March 25, 2016, Peter Lee, VP of Microsoft Research, posted the following apology on the Official Microsoft Blog: As many of you know by now, on Wednesday we launched a chatbot called Tay …
Cybersecurity and AI
S Mohanty, S Vyas – How to Compete in the Age of Artificial Intelligence, 2018 – Springer
… Yet, chatbots also bring risks. Microsoft introduced Tay, a chatbot to engage people through “casual and playful conversation,” speak like millennials, learning from the people it interacted with on Twitter and the messaging apps Kik and GroupMe …
Possible Risks in Social Networks: Awareness of Future Law-Enforcement Officers
E Butrime, V Zuzeviciute – World Conference on Information Systems and …, 2018 – Springer
… http://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist. 8. Butrime, E., Zuzeviciute, V.: Students’/future law-enforcement officers’ perspective on safety in the digital space // Kultura bezpieczeństwa: Nauka – Praktyka – Refleksje …
Carlos A. Scolari
FP de Sá – MATRIZes, 2018 – periodicos.usp.br
… Also, there are chatbots in social media that are made by designers and engineers who work with artificial intelligence, and these same bots can behave imitating the worst of human beings, as was the case of the Tay chatbot developed by Microsoft (Vincent, 2016), which in …
Building brand resonance with chatbots: assessing the importance of giving your bot a human personality
V Verney, A Poulain – 2018 – biopen.bi.no
… One can draw a parallel between the personality of a brand expressed through its employees and this personality expressed through a chatbot. Chatbots can be efficient to create a bond between brands and consumers just as CRM …
Social, Economic, and Ethical Consequences of AI
BR Hussein – researchgate.net
… http://justhinter.com/artificial-intelligence-based-chatbots/. [Accessed: 16-Feb-2018]. [8] “Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day – The Verge.” [Online]. Available: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist …
Pursuing Diversity and Inclusion in Technical Services
D Hodges – Serials Review, 2018 – Taylor & Francis
… Vincent, J. (2016). Twitter taught Microsoft’s AI chatbot to be a racist … in less than a day. The Verge. Retrieved from https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist …
User Experiences with Personal Intelligent Agents: A Sensory, Physical, Functional and Cognitive Affordances View
S Moussawi – Proceedings of the 2018 ACM SIGMIS Conference on …, 2018 – dl.acm.org
… in these cases being inadequate [18]. Additionally, various recent applications emphasize the need for some ethical guidelines. For instance, Microsoft’s Tay is a chatbot that learned from Twitter feed and turned into a racist …
AI In Weapons: The Moral Imperative For Minimally-Just Autonomy
J Scholz, J Galliott – unsworks.unsw.edu.au
… Many of these undoubtedly go unreported for commercial and classification reasons, but Microsoft’s Tay AI Bot, a machine learning chatbot that learns from … After a short period of operation, Tay developed an ‘ego’ or ‘character’ that was strongly sexual and racialized, and …
24 Communication Technology and Perception
DJ Gunkel – Communication and Media Ethics, 2018 – books.google.com
… Tay’s racism, in other words, is our fault. Later, on Friday, March 25, Peter Lee, vice president of Microsoft Research, posted the following apology on the Official Microsoft Blog: As many of you know by now, on Wednesday we launched a chatbot called Tay …
Faster is Not Always Better: Understanding the Effect of Dynamic Response Delays in Human-Chatbot Interaction.
U Gnewuch, S Morana, MTP Adam, A Maedche – ECIS, 2018 – aisel.aisnet.org
… and Maedche, Alexander, “Faster is Not Always Better: Understanding the Effect of Dynamic Response Delays in Human-Chatbot Interaction” (2018 … Another company that incorporates AR in their pro- duction operations is Volvo, which uses Microsoft HoloLens in their assembly …
The political economy of bots: Theory and method in the study of social automation
S Woolley – The Political Economy of Robots, 2018 – Springer
… 2016: Facebook launches a bot platform for Messenger. 2016: CEO of Messenger app Kik predicts a “bot goldrush.”. 2016: Microsoft’s Twitterbot “Tay,” billed as an AI chatbot, is fooled into publicly tweeting racist, misogynist, and generally offensive content …
Efficient repair of polluted machine learning systems via causal unlearning
Y Cao, AF Yu, A Aday, E Stahl, J Merwine… – Proceedings of the 2018 …, 2018 – dl.acm.org
… The most recent real-world example is Microsoft’s AI-powered chatbot Tay. Tay learned racism because some Twitter users interacted with Tay using offensive, racist words, and these words were included in Tay’s training set [36] …
State-of-the-Art Approaches for German Language Chat-Bot Development
N Boisgard – 2018 – ec.tuwien.ac.at
… Figure 1.2: The number of publications containing the terms “chatbot”, “chatterbot”, “chat-bot” or “chat bot” released in the … number of publications has risen significantly after several announcements in 2016, with the release of Microsoft’s first Twitter bot Tay [Dewey, 2016 …
A Computational and Biological Approach to Machine Learning
JR Lindbergh III – 2018 – jlindbergh.com
… in data, and so: amplify the human society biases and problems.” Ciucci’s theory was realized in the case of the conversational AI, Tay, which began to post inflammatory comments hours after its launch. Unfortunately, this is an inescapable hazard of machine learning …
From Big Data to Deep Learning: A Leap Towards Strong AI or ‘Intelligentia Obscura’?
S Strauß – Big Data and Cognitive Computing, 2018 – mdpi.com
… from web search engines (eg, Google), text and image recognition in social media (eg, on Facebook), language translation (eg, deepl.com), self-driving cars, sophisticated industry robots, some chat bots and speech assistant systems (eg, Apple’s Siri, Microsoft’s Cortana, or …
Ten guidelines for intelligent systems futures
D Loi – Proceedings of the Future Technologies Conference, 2018 – Springer
… The Verge. https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist. Accessed 24 Mar 2016. 9. Wang, Y., Kosinski, M.: Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. J. Pers. Soc. Psychol …
SJ and FR. Oswald AJ Mascarenhas, SJ (2018).’
FROAJ Mascarenhas – Artificial Intelligence and the Emergent Turbulent … – emerald.com
… Microsoft Launches “TAY”. On March 23, 2016, Microsoft Corporation launched “Tay,” a Twitter chatbot that used artificial … Microsoft kept deleting its tweets, as it became more and more racially abusive, until they had to finally shut it down within an extremely short span of time …
Virtually Free Speech: The Problem of Unbridled Debates on Social Media
B Parker – INTUITION – scholarsarchive.byu.edu
… 12 [2017], Iss. 2, Art. 16. https://scholarsarchive.byu.edu/intuition/vol12/iss2/16 … On March 23, 2016, Microsoft released an artificial intelligence (AI) chatbot on Twitter known as Tay …
What is the proper scope of the law: all but only humans, or all socially recognized agents?
S Arman – WSJ, 2018 – warwick.ac.uk
… Microsoft’s Twitter chatbot – Tay – turned into a “racist, pro-Hitler troll with a penchant for bizarre conspiracy theories” (Johnston, 2017). Showing meaning is embedded within language; its recognition in ‘infant’ robots is similar to adolescents …
Empathy and virtual agents for learning applications in symbiotic systems
RMG Iranzo, N Padilla-Zea… – 2018 IEEE Global …, 2018 – ieeexplore.ieee.org
… To do it, we recommend a quick 1 https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist 2 https://www.iso.org/obp/ui/#iso:std:iso-iec:25010:ed-1:v1:en 3 http://cultureofempathy.com/References/Test.htm reasoner based in a behavior ontology …
The Ugliness of Trolls: Comparing the Methodologies of the Alt-Right and the Ku Klux Klan
N Eckstrand – Cosmopolitan Civil Societies: an Interdisciplinary …, 2018 – epress.lib.uts.edu.au
… Similarly, the Microsoft chat-bot Tay, an artificial intelligence programmed to interact and learn from Twitter users, ‘learned’ in less than a day to spout conspiracy theories, speak admirably of Hitler, and compare President Obama to a monkey (Hunt, 2016) …
Conversational marketing: Creating compelling customer connections
N Sotolongo, J Copulsky – Applied Marketing Analytics, 2018 – ingentaconnect.com
… Microsoft’s vision for the future is built on conversation as a platform, taking on the professional world … of the movie, ‘Teenage Mutant Ninja Turtles: Out of the Shadows’, Paramount launched chatbot characters from … Chatbots can also be used to sign customers up for services …
Deep learning in ophthalmology: a review
PS Grewal, F Oloumi, U Rubin, MTS Tennant – Canadian Journal of …, 2018 – Elsevier
… Such detrimental outcomes can be illustrated by a recent example. Tay is Microsoft’s AI chat-bot that was set up on Twitter to learn from the existing tweets on how to communicate and tweet for itself. 32 Tay’s very first tweet was …
Artificial Intelligence Education Ethical Problems and Solutions
L Sijing, W Lan – … on Computer Science & Education (ICCSE), 2018 – ieeexplore.ieee.org
… In less than a day, the chatbot was taught to be a racist robot.[8] On the basis of ensuring the accuracy and comprehensiveness of training data, we must also pay attention to the existing correlation between the data … Microsoft’s AI Tay offends and goes offline; Deepdrumpf AI …
Chatbots as an approach for a faster enquiry handling process in the service industry
AAA Weißensteiner – Signature, 2018 – modul.ac.at
… The example of Microsoft Tay, which started to post racist tweets within days of its launch, demonstrates that hackers tried to abuse chatbots to spread racist, sexist or other offensive news. Furthermore, some users’ willingness to use a chatbot could be held back for the …
What algorithms want: imagination in the age of computing
T Fu – 2018 – Taylor & Francis
… and power. While algorithms as the result of machine learning have acquired enough information about the user’s digital footprint to make predictions, or to talk as Microsoft’s chatbot, Tay, Finn raises a timely question. That is …
Non-Autonomous Artificial Intelligence Programs and Products Liability: How New AI Products Challenge Existing Liability Models and Pose New Financial Burdens
G Swanson – Seattle UL Rev., 2018 – HeinOnline
… their activity.”). 12. Hope King, After Racist Tweets, Microsoft Muzzles Teen Chat Bot Tay, CNN (Mar. 24, 2016), http://money.cnn.com/2016/03/24/technology/tay-racist-microsoft/?ild=EL [https://perma.cc/4FDN-4LEL]. 13. Peter …
Workshop Report: The Governance of Decision-Making Algorithms
K Bejtullahu Michalopoulos – 2018 – infoscience.epfl.ch
… physical world, or us’ (Smith, 2018). As such, DMLAs can potentially behave in unpredictable ways – as Microsoft’s chat bot Tay did (Angulo, 2018) – and also be intentionally misused. If ‘learned behaviour’ increasingly becomes a …
Identification of future signal based on the quantitative and qualitative text mining: a case study on ethical issues in artificial intelligence
YJ Lee, JY Park – Quality & Quantity, 2018 – Springer
… Some noteworthy incidents involve a child injury by security robot at Palo Alto shopping mall, 3 malicious behavioral learning by Microsoft chat bot ‘Tay’, 4 and the death of an individual involving Tesla’s self-driving car … Samuel Gibbs, ‘Microsoft’s racist chatbot returns with …
Finding Good Representations of Emotions for Text Classification
JH Park – arXiv preprint arXiv:1808.07235, 2018 – arxiv.org
… a friend and making a conversation. When training NLP models, such as chatbots, things do not always go as intended. The famous incident of Microsoft chatbot Tay, which learned directly from users’ tweets without any filtering …
Algorithms, platforms, and ethnic bias: An integrative essay
S Silva, M Kenney – Phylon (1960-), 2018 – JSTOR
… Thus, Microsoft’s operating system (OS) was a platform that allowed computer makers to build machines that could, through the OS platform, interface with software built by developers. The application software allows users to perform useful func- tions …
Designing machines with autonomy: from independence to interdependence to solidarity
Y LIU, L PSCHETZ – research.ed.ac.uk
… lead to controversial outcomes. A good example is the collapse of Microsoft’s Tay. Released on Twitter on March 23, 2016, Tay was an AI chatbot created for the purposes of engagement and entertainment. Tay’s behaviour was …
Robot Criminals
Y Hu – U. Mich. JL Reform, 2018 – HeinOnline
… harm to humans: self-driving cars claimed their first death in 2016;6 automated trading allegedly triggered a recent crash in the United States stock market in 2018; 7 and Tay, a “chat bot,” repeatedly made … 8. Daniel Victor, Microsoft Created a Twitter Bot to Learn From Users …
Automatic Conversation Review for Intelligent Virtual Assistants
IR Beaver – 2018 – digitalrepository.unm.edu
… xviii Thesis Statement 1 1 Introduction 2 2 Background 7 2.1 Chatbot or IVA … with users.” [2] IVAs are commonly used for answering questions and task optimization as in the case of Apple’s Siri, Microsoft’s Cortana, or Google Now. However, many companies …
Interactive Advising with Bots: Improving Academic Excellence in Educational Establishments
W Nwankwo – Wilson Nwankwo. Interactive Advising with Bots …, 2018 – papers.ssrn.com
… In a similar vein, Persiyanov [32] had recognized two major types of chatbots which he called dialogue systems: goal-oriented (Siri, Alexa, Cortana, etc.) and general conversation bots (Microsoft Tay bot). According to Sansonnet et al [33], the functions of a ChatBot may be …
The AI delusion
G Smith – 2018 – books.google.com
The AI Delusion. “Data professionals and consumers can benefit from Smith’s entertaining and accessible demonstration that more computing power and more data do not imply more intelligence. We need to have more confidence in our human intellect …
Gendering of AI/Robots: Implications for Gender Equality amongst Youth Generations
E Engstrom – Report written by Eugenia Novoa (Speaker) …, 2018 – arielfoundation.org
… done-up hair-dos. Another example is the AI chatbot Tay, gendered as a female, launched by Microsoft in 2016 on Twitter to interact with the community, only to be taken down 16 hours later. Tay would formulate her responses …
Research Ethics in Machine Learning
C Collectif – 2018 – hal.archives-ouvertes.fr
… 16 ii.2 chatbots Chatbots, or conversational agents, are software agents that can automatically process natural … Moreover, chatbot behavior is conditioned by training data … In April 2016, Microsoft’s Tay chatbot, which had the capacity to learn continuously from its interactions with …
Configuring a quotation through a conversational agent
F Gundersen – 2018 – theseus.fi
… though there has been a lot of effort on moving closer to open-domain chatbots. Natural language processing, or NLP, is a field of science that aims to break down and … Microsoft released a demo of a chatbot that uses machine learning that lives in Twitter …
Hallucination Machines
A Widdoes – 2018 – digitalscholarship.unlv.edu
… The Economist). Another article was on Tay, Microsoft’s first AI chatbot on Twitter, and how over the period of 24 hours it quickly assimilated hate speech and far right ideologies into its dialogue with users (West). I began thinking …
Symbiotic Artificial Intelligence and Its Challenges in Cybersecurity and Malware Research
AN Merrill – 2018 – search.proquest.com
Symbiotic Artificial Intelligence and Its Challenges in Cybersecurity and Malware Research, by Anthony N. Merrill. A Capstone Project Submitted to the Faculty of Utica College, December 2018 …
Marketplaces, markets, and market design
AE Roth – American Economic Review, 2018 – aeaweb.org
… from those that it encountered on Twitter. See, eg, Christopher Heine, “Microsoft’s Chatbot ‘Tay’ Just Went on a Racist, Misogynistic, Anti- Semitic Tirade,” AdWeek, March 24, 2016. One of the founders of Twitter, Evan Williams …
From GOODBOT to BESTBOT
O Bendel – 2018 AAAI Spring Symposium Series, 2018 – aaai.org
… This watchfulness can be supported by the design of the chatbot. The BESTBOT, just like the GOODBOT, can emphasize that it is only a machine (meta rule 1), and can request the user to verify statements periodically … The most popular one was Tay by Microsoft …
Attacks and Defenses towards Machine Learning Based Systems
Y Yu, X Liu, Z Chen – Proceedings of the 2nd International Conference …, 2018 – dl.acm.org
… caused Tay to tweet highly offensive content that was racist, sexual, drug-related, and abusive in nature merely by tweeting content like that to Tay … 3] https://www.theguardian.com/technology/2016/mar/26/microsoft-deeply-sorry-for-offensive-tweets-by-ai-chatbot [4] Huang …
Algorithmic challenges to autonomous choice
MS Gal – Mich. Tech. L. Rev., 2018 – HeinOnline
ALGORITHMIC CHALLENGES TO AUTONOMOUS CHOICE. Michal S. Gal. “I never think of the future. It comes soon enough.” Albert Einstein. I. INTRODUCTION …
Machinic rhetorics and the influential movements of robots
MC Coleman – Review of communication, 2018 – nca.tandfonline.com
…
New Research and Future Directions
A Vieira, B Ribeiro – Introduction to Deep Learning Business Applications …, 2018 – Springer
… In China, where the experiment was first launched, the bot was successful. But in the United States, the bot became sexist, racist, and xenophobic ( https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist ) …
Bias Amplification in Artificial Intelligence Systems
K Lloyd – arXiv preprint arXiv:1809.07842, 2018 – arxiv.org
… Similarly, interaction bias occurs when machines are empowered to learn without safeguards in place to identify and exclude harmful or pernicious beliefs; Microsoft’s racist and anti-Semitic chatbot Tay that had to be shut down after 24 hours of humans teaching it racial slurs is …
Reinforcement Learning and Robotics
A Vieira, B Ribeiro – Introduction to Deep Learning Business Applications …, 2018 – Springer
… probably with some human supervision to prevent them from “inappropriate behavior,” like what happened to the Microsoft Twitter chatbot, Tay ( https://en.wikipedia.org/wiki/Tay_(bot) ). Most big companies are using, testing, or considering the implementation of chatbots in their …
AI in Marketing, Sales and Service: How Marketers Without a Data Science Degree Can Use AI, Big Data and Bots
P Gentsch – 2018 – books.google.com
… 4.1.6 Possible Limitations of AI-Based Bots. 4.1.7 Twitter Bot Tay by Microsoft. 4.2 Conversational Commerce. 4.2.1 Motivation and Development … Three Takeaways to Work on When Creating Your Chatbot. 5.7 Alexa Becomes Relaxa at an Insurance Company …
Internet of Personalized and Autonomous Things (IoPAT): Smart Homes Case Study
S Elmalaki, Y Shoukry, M Srivastava – … Workshop on Smart Cities and Fog …, 2018 – dl.acm.org
… in the interaction with IoPAT systems. For instance, Microsoft Twitter chatbot, Tay [27], is an infamous example of how artificial intelligence agents can be fooled by humans to perform unexpectedly. In the context of the smart …
Collective behavior of social bots is encoded in their temporal twitter activity
A Duh, M Slak Rupnik, D Korošak – Big data, 2018 – liebertpub.com
Behind the Algorithm
J Svensson – … Research and Education Association), 7th European …, 2018 – ls00012.mah.se
… Algorithms are on the Agenda: Is Amazon homophobic? (Striphas, 2015). Is Google Play homophobic? (Ananny, 2011). Microsoft’s chat bot Tay (Neff & Nagy, 2016). Gender biases in image search algorithms (Kay et al., 2015) …
On the Media-political Dimension of Artificial Intelligence
A Sudmann – Digital Culture & Society, 2018 – degruyter.com
… Facebook, Microsoft, and many other IT companies basically have the same agenda (cf. Boyd 2017) … There was great turmoil when Microsoft’s chatbot “Tay” was trained by Twitter users to learn racist, sexist, as well as anti-Semitic statements (Vincent 2016) …
Four perspectives on human bias in visual analytics
E Wall, LM Blaha, CL Paul, K Cook, A Endert – Cognitive biases in …, 2018 – Springer
… While a vulnerability in Tay was exploited, the chatbot nonetheless conveys what can happen when human bias is introduced unchecked into a system … Lee P (2016) Learning from Tay’s introduction. https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/. 29 …
An Intelligent Decision-Making System for Autonomous Units Based on the Mind Model
Z Kowalczuk, M Czubenko – … Methods & Models in Automation & …, 2018 – ieeexplore.ieee.org
… [19] E. Hunt, “Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter — Technology — The Guardian,” 2016. [20] PH Kahn, Jr., B. Friedman, DR Pérez-Granados, and NG Freier, “Robotic pets in the lives of preschool children,” Interaction Studies, vol. 7, pp …
What Every Policymaker Needs to Know
P Scharre, MC Horowitz – 2018 – indexinvestor.com
JUNE 2018. Artificial Intelligence: What Every Policymaker Needs to Know. Paul Scharre and Michael C. Horowitz. Preface by Robert O. Work … Technology is changing our lives …
Artificial Intelligence and the Internet of Things
M Bunz, L Janciute – 2018 – uwestminsterpress.co.uk
… challenges which demand new policies. The thirty cases studied ranged from a smartphone and an activity tracker (Fitbit) to an intelligent personal assistant (Alexa, Siri), a chatbot (Microsoft’s Tay), a self-driving car (Tesla, Google-Waymo) to a self-service checkout …
Cognitive Automation as Part of Deakin University’s Digital Strategy.
R Scheepers, MC Lacity… – MIS Quarterly …, 2018 – search.ebscohost.com
… See Hern, A. “Microsoft scrambles to limit PR damage over abusive AI bot Tay,” March 24, 2016 … The most recent CA application is called Genie,33 a platform made up of chatbots, artificial intelligence (eg, Watson), voice recognition and predictive analytics.34 Genie is …
Adjudication by algorithm
C Piovesan, V Ntiri – marcomm.mccarthy.ca
… Interaction bias exists when the user biases the algorithm by interacting with it. A recent example is Microsoft’s Twitter-based chatbot called “Tay,” which was programmed to learn from the behaviour of other Twitter users …
Glitching the Fabric: Strategies of new media art applied to the codes of knitting and weaving
DNG McCallum – 2018 – gupea.ub.gu.se
David N.G. McCallum. Glitching the Fabric: Strategies of New Media Art Applied to the Codes of Knitting and Weaving …
Our New Handshake with the Robot
M Remarczyk, P Narayanan, S Mitrovic… – Proceedings of SAI …, 2018 – Springer
… In other words, 16% of projects were of a nature that engage employees and customers using natural language processing chatbots, intelligent agents, and machine learning. 1.2 It – The Machine … Microsoft, for example, released its automated chat-bot Tay (Thinking About …
The artificial intelligence imperative: a practical roadmap for business
A Lauterbach, A Bonime-Blanc – 2018 – books.google.com
… and “buggy” applications. Microsoft’s Tay chatbot represents a good example of the perils of such new products. Tay was a Twitter-based system designed to learn how to interact using examples of what people said to it. In a …
Games between humans and AIs
SJ DeCanio – AI & SOCIETY, 2018 – Springer
… Microsoft’s “Tay,” a chat bot intended to “mimic the verbal tics of a 19-year old American girl,” was coaxed by Twitter users “into regurgitating some seriously offensive language, including pointedly racist and sexist remarks.” Tay was quickly taken offline (Alba 2016). 16 …
Potential Impact of Automation Technology in the IS/IT Customer Service Industry
J Feith – 2018 – scss.tcd.ie
… rules and flows (Menon, 2017). Again, in the context of customer service, this means that chatbots are mostly suitable for straightforward and simple processes that are repetitive … humans can. A case in which AI went wrong was Microsoft’s chatbot “Tay”, which learnt …
Protecting your patients’ interests in the era of big data, artificial intelligence, and predictive analytics
P Balthazar, P Harri, A Prater, NM Safdar – Journal of the American College …, 2018 – Elsevier
… Indeed, the potential for this was realized in the popular imagination when Microsoft created an artificial intelligence Twitter chatbot that started posting racist and genocidal tweets within 24 hours of “learning” from human-generated text [59] …
Conversation Design
S Rozga – Practical Bot Development, 2018 – Springer
… There are more rules you may pick up on as you gain experience in this space across different messaging channels, but this list is a good starting point and something I suggest we follow on every chat bot project … 13. Microsoft Silences its new AI Bot Tay: https://techcrunch …
Platformed racism: The Adam Goodes war dance and booing controversy on twitter, YouTube, and Facebook
A Matamoros-Fernandez – 2018 – eprints.qut.edu.au
Platformed Racism: The Adam Goodes War Dance and Booing Controversy on Twitter, YouTube, and Facebook. Ariadna Matamoros-Fernández. BA Autonomous University of Barcelona, MA University of Amsterdam …
Legal Cases: Argumentation versus ML
T BENCH-CAPON – cgi.csc.liv.ac.uk
… ACM, 1995. [27] R. Price. Microsoft is deleting its AI chatbot’s incredibly racist tweets, 2016. http://uk.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3. [28] EL Rissland and MT Friedman. Detecting change in legal concepts …
Customer experience challenges: bringing together digital, physical and social realms
RN Bolton, JR McColl-Kennedy… – Journal of Service …, 2018 – emeraldinsight.com
Natural language generation for commercial applications
A van de Griend, W OOSTERHEERT, T HESKES – 2018 – ru.nl
… 6.3 Evaluating chatbots … Even the recent controversy surrounding Microsoft’s chatbot Tay (Vincent, 2016) makes it seem like NLG is not some concept from science fiction anymore. However, the actual functioning of these applications is not publicly known …
The current state of artificial intelligence and the information profession: Or do librarian droids dream of electric books?
HP Kirkwood – Business Information Review, 2018 – journals.sagepub.com
…
Involving of Artificial Intelligence in Committing a Crime as a Challenge to the Criminal Law of the Republic of Serbia
AR Ivanovic, ZS Pavlovic – JE-Eur. Crim. L., 2018 – HeinOnline
… intended. This was the case with the so-called Tay, the chatbot Microsoft unleashed on Twitter and other social platforms two years ago that quickly turned into a racist, sex-crazed neo-Nazi (Claussén-Karlsson, 2017:18). A second …
Social Services for Digital Citizens: Opportunities for Latin America and the Caribbean
C Pombo, R Gupta, M Stankovic – 2018 – books.google.com
… Alan Boyle, 2016, Microsoft’s Chatbot Gone Bad, Tay, Makes MIT’s Annual List Of Biggest Technology Fails, at https://www.geekwire.com/2016/microsoft-chatbot-tay-mit-technology-fails. Kevin C. Desouza and Rashmi Krishnamurthy, 2017, Chatbots Move Public Sector Towards …
Data-driven meets theory-driven research in the era of big data: opportunities and challenges for information systems research
W Maass, J Parsons, S Purao, VC Storey… – Journal of the …, 2018 – aisel.aisnet.org
… rates for white offenders. In another application domain, the Tay chatbot was launched by Microsoft in 2016 to embed machine learning as a way to engage in realistic conversations with Twitter users. It was quickly shut down …
Human-Aided Bots
P Kucherbaev, A Bozzon… – IEEE Internet Computing, 2018 – ieeexplore.ieee.org
… To address this issue, Microsoft relies on proven crowd-workers with whom nondisclosure agreements are signed … and P. Nagy, “Talking to bots: Symbiotic agency and the case of Tay,” Int … Z. Yu, Z. Xu, AW Black, and AI Rudnicky, “Chatbot evaluation and database expansion via …
Being Alone with Yourself is Increasingly Unpopular: The Electronic Poetry of Jenny Holzer
L Jones – Journal of Narrative Theory, 2018 – muse.jhu.edu
… Like Microsoft’s chatbot Tay who learned to be a racist from Twitter in less than twenty-four hours (Vincent), some users try to interrupt the cultural and aesthetic critique and language play of participants; for instance, in response to “A Little Knowledge Can Go A Long Way” users …
Unpacking the social media bot: A typology to guide research and policy
R Gorwa, D Guilbeault – Policy & Internet, 2018 – Wiley Online Library
… interact directly with human users (see, for instance, the infamous case of Microsoft’s “Tay” in Neff … a specific purpose, and not automated agents in the sense of a chatbot or social … are not designed to influence users through direct communicative activities, and chatbots are often …
How do we Develop Ethically Aware AI?
M Luthra – 2018 – eprints.illc.uva.nl
… environment? This led me to reflect on Tay, the Microsoft chat-bot that turned misogynist and racist when left to learn from its environment (I will discuss Tay in further detail in chapter three) … explicit ethical agent. On the other hand, a chatbot seems to be a more plausible candidate …
Deep Neural Language Generation with Emotional Intelligence and External Feedback
V Srinivasan – 2018 – search.proquest.com
… interpret in more human ways. eBay has its own chatbot that acts as a shopping assistant and helps customers in … customer base in case of customer service chatbots … Microsoft’s Tay was designed to explore conversational understanding and was hoped …
Discourses on Languages and Identities in Readers’ Comments in Ukrainian Online News Media: An Ethnolinguistic Identity Theory Perspective.
R Horbyk – East/West: Journal of Ukrainian Studies, 2018 – search.ebscohost.com
… When an artificial intelligence chatbot was released by Microsoft in March 2016 to learn human conversation patterns through mimicking Twitter users, it took less than a day for it to start reproducing racist rants (“Tay, Microsoft’s”) …
Artificial intelligence and human development: toward a research agenda
M Smith, S Neupane – 2018 – idl-bnc-idrc.dspacedirect.org
… Learn about effective regulatory models. Document and assess AI regulatory models developed to deal with the emergence of new AI-driven activities such as predictive policing, autonomous vehicles, and chatbots. Determine …
Is this the era of misinformation yet: combining social bots and fake news to deceive the masses
P Wang, R Angarita, I Renna – Companion Proceedings of the The Web …, 2018 – dl.acm.org
… For example, Tay, a Twitter chatbot, became “an evil Hitler-loving, incestuous sex-promoting, Bush did 9 … of Tay’s learning algorithm to promote offensive ideas, and the scientists behind Tay were not … Microsoft deletes ‘teen girl’ AI after it became a Hitler-loving sex robot within 24 …
Wild patterns: Ten years after the rise of adversarial machine learning
B Biggio, F Roli – Pattern Recognition, 2018 – Elsevier
A Developmental Model of Trust in Humanoid Robots
M Patacchiola – 2018 – pearl.plymouth.ac.uk
… and acquire new knowledge only from the reliable ones. An example of the importance of this point has been given by a recent accident involving an artificial agent. Microsoft released an experimental chat bot called Tay, with the purpose of engaging users in conversations …
The Economics of Artificial Intelligence
S Mohanty, S Vyas – How to Compete in the Age of Artificial Intelligence, 2018 – Springer
… There are AI solutions like chatbots that learn through interaction. The idea is fairly simple. You interact with the machine and machine in turn learns from you to form the basis of subsequent responses. Microsoft’s Tay, a Twitter-based chatbot, was designed to learn from its …
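The snippet above describes chatbots that learn from interaction: the machine forms its subsequent responses from what users say to it. A minimal, purely illustrative Python sketch (not Tay’s actual algorithm; the class and names are hypothetical) shows why such unfiltered learning is trivially easy to poison:

```python
import random

class NaiveLearningBot:
    """Toy chatbot that 'learns' by adding every user message to its
    response pool -- illustrating why unfiltered online learning is
    vulnerable to coordinated poisoning of the kind Tay suffered."""

    def __init__(self, seed_responses):
        self.responses = list(seed_responses)

    def reply(self, user_message):
        # "Learning": the raw user message becomes a candidate future reply.
        self.responses.append(user_message)
        return random.choice(self.responses)

bot = NaiveLearningBot(["hello!", "tell me more"])
reply = bot.reply("some offensive phrase")
# After a single interaction the injected phrase is already a candidate reply,
# so a coordinated group of users can dominate the response pool quickly.
```

The design flaw is the absence of any filter or moderation step between user input and the learned response pool, which is precisely the failure mode several of the papers listed here analyze.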
Artificial Agents and General Principles of Law
A von Ungern-Sternberg – German Yearbook of International Law …, 2018 – papers.ssrn.com
… foreseen by its creator, programmer or user. Consider the rather innocuous example of Microsoft’s chat bot “Tay”, a virtual user of social media platforms programmed to learn from other users. Microsoft had to disable Tay after only a few hours of operation when it quickly …
Attributions of morality and mind to artificial intelligence after real-world moral violations
DB Shank, A DeSanti – Computers in Human Behavior, 2018 – Elsevier
… The children’s videos event focused on an app intended for young children which showed violent or obscene videos and advertisements. The social media bot event involved Tay a Twitter bot tweeting racial slurs, showing hate, and supporting Hitler …
Beneficial AI: the next battlefield
E Oliveira – Journal of Innovation Management, 2018 – journals.fe.up.pt
… As simple as that. Even if we look at the present, we are not willing to replicate what happened with Microsoft Corporation Chatbot Tay that began to post offensive tweets, forcing Microsoft to shut down the service about 16 hours after its launch …
The Press Clause and Digital Technology’s Fourth Wave: Media Law and the Symbiotic Web
J Schroeder – 2018 – taylorfrancis.com
… and other virtual spaces, such as message boards and online customer service websites.2 Olivia Taters is classified as a “chatbot” or a … In one example, Microsoft’s Tay experiment in 2016, the corporation’s attempt to create an artificially intelligent entity that could learn from …
Algorithms and public service media
JK Sørensen, J Hutchinson – RIPE@ 2016, 2018 – vbn.aau.dk
… The ChatBot initiative is part of a complex, on-going transition at the ABC – from a traditional PSB organisation … The importance of getting automation right is evident in the recent derailing of Microsoft’s foray into artificial intelligence (AI) with its multiplatform bot called ‘Tay’ …
A qualitative exploration of perceptions of algorithmic fairness
A Woodruff, SE Fox, S Rousso-Schindler… – Proceedings of the 2018 …, 2018 – dl.acm.org
… algorithmically aided decision-making. For example, Perez reported that Microsoft’s Tay (an artificial intelligence chatbot) suffered a coordinated attack that led it to exhibit racist behavior [65]. Researchers have also reported that …
Customer experience challenges: bringing together digital, physical and social realms Ruth N. Bolton, Janet R. McColl-Kennedy, Lilliemay Cheung, Andrew …
MZ Witell – ruthnbolton.com
… For example, Tay, the rogue chatbot that Microsoft developed, was a female chatbot with its own Twitter account. It is a machine learning project, designed for human engagement that communicates with 18-to 24-year-olds, learns from them and gets smarter with time …
Deceptive Machine Learning for Offense and Defense Targeting Financial Institutions
JJ Gurr – 2018 – search.proquest.com
… The rented computational power and resource in the cloud usually comes from the likes of Google, Amazon, and Microsoft. Google’s Cloud Machine … associated training data to be trained in the cloud. Microsoft in their offering of Azure Batch AI …
Disruptive Technology and the Ethical Lawyer
A McPeak – U. Tol. L. Rev., 2018 – HeinOnline
… the DoNotPay app, which automates some simple legal matters.49 Initially, the DoNotPay application used chatbots to automatically … Bias Problem, FORTUNE (June 25, 2018), http://fortune.com/longform/ai-bias-problem/ (describing how Tay, a Microsoft chatbot that launched …
The Ethical and Legal Challenges of Artificial Intelligence: The EU response to biased and discriminatory AI
A Siapka – Available at SSRN 3408773, 2018 – papers.ssrn.com
… undisclosed.4 In the same year, Microsoft’s conversational AI (chatbot), Tay, made her debut.5 Tay ran on ML to hold sophisticated conversations on Twitter: the more people would converse with her, the better and more specific her responses would be. As soon as online trolls …
Managing the rise of Artificial Intelligence
R Mclay – Retrieved November, 2018 – tech.humanrights.gov.au
… of LUI News, which focuses on chatbot technologies, claims that narrow bots will be generating US$623 billion business globally by 2020. There are many hundreds of thousands of developers working on chatbots world-wide including 30,000 on the Microsoft Skype platform …
11 Corporate social innovation
P Mirvis, B Googins – Business Strategies for Sustainability, 2018 – books.google.com
… It was targeted at American social media users – 18-to-24-year-olds – and was, according to Microsoft, ‘designed to … Shortly thereafter, however, Tay was shut down … According to Louis Rosenberg, the founder of Unanimous AI, ‘like all chat bots, Tay has no idea what it’s saying …
The cyber decade: Cyber defence at a X-ing point
R Koch, M Golling – 2018 10th International Conference on …, 2018 – ieeexplore.ieee.org
… Microsoft’s Twitter chatbot “Tay” had to be shut down in 2016 after less than 24 hours because it began using racist language [65], while a team from MIT’s Computer Science and Artificial Intelligence Laboratory tricked Google’s AI into misidentifying pictures of turtles as …
Ethics as methods: doing ethics in the era of big data research—introduction
AN Markham, K Tiidenberg… – Social Media+ …, 2018 – journals.sagepub.com
This is an introduction to the special issue of “Ethics as Methods: Doing Ethics in the Era of Big Data Research.” Building on a variety of theoretical paradigm…
Future politics: Living together in a world transformed by tech
J Susskind – 2018 – books.google.com
FUTURE POLITICS. James Susskind. Great Clarendon Street, Oxford, OX2 6DP, United Kingdom. Oxford University Press is a department of the University of Oxford …
The Technological Gap Between Virtual Assistants and Recommendation Systems
D Rafailidis – arXiv preprint arXiv:1901.00431, 2018 – arxiv.org
… A Chatbot is a computer program that carries out a conversation through, whereas smart speakers … Microsoft recently reported that Cortana now has 133 million monthly users [26], estimating that … [10] James Davidson, Benjamin Liebald, Junning Liu, Palash Nandy, Taylor Van …
Evorus: A crowd-powered conversational assistant built to automate itself over time
THK Huang, JC Chang, JP Bigham – … of the 2018 CHI Conference on …, 2018 – dl.acm.org
… Microsoft Tay [45] introduced an AI-powered agent which encountered problems when deployed … PART I: LEARNING TO CHOOSE CHATBOTS OVER TIME Evorus’ chatbot selector learns over … Ranking and Sampling Chatbots Upon receiving a message from a user, Evorus …
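The Evorus snippet describes a chatbot selector that learns over time which bot to sample for each user message. A minimal epsilon-greedy sketch of that general idea (an assumption made here for illustration, not Evorus’s actual crowd-vote scoring scheme; all names are hypothetical):

```python
import random

def select_chatbot(scores, epsilon=0.1, rng=random):
    """Pick a chatbot name: usually the highest-scoring one, but with
    probability epsilon a random one, so weaker bots keep collecting
    feedback and can improve their standing over time."""
    if rng.random() < epsilon:
        return rng.choice(list(scores))
    return max(scores, key=scores.get)

def update_score(scores, bot, reward, lr=0.2):
    """Move the bot's score toward the observed reward
    (e.g. a crowd worker upvoting or rejecting its reply)."""
    scores[bot] += lr * (reward - scores[bot])

scores = {"rule_bot": 0.5, "neural_bot": 0.5}
update_score(scores, "neural_bot", reward=1.0)   # neural_bot gets an upvote
chosen = select_chatbot(scores, epsilon=0.0)
# With exploration disabled, the now higher-scoring neural_bot is chosen.
```

The exploration term is the design point: a pure argmax selector would never revisit a bot that started poorly, while occasional random sampling keeps the candidate pool alive.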
A Tour of AI.
TN Dinh – 2018 – osti.gov
… Perhaps the most dramatic event occurred in 1997 when Microsoft’s Deep Blue AI became the first computer program to defeat the reigning world … first imagine that we have a list containing things that we generally want to call AI, things like spam detection, chatbots, and self …
Language Technology and 3rd Wave HCI: Towards Phatic Communication and Situated Interaction
L Borin, J Edlund – New Directions in Third Wave Human-Computer …, 2018 – Springer
… Microsoft’s Twitter bot Tay turning into “a Hitler-loving sex robot” in 24 hours (Horton 2016 … provides us with a possible explanation on both what went wrong with Watson, Tay and the … the results of this work incorporated in actual HCI applications, such as chatbots and spoken …
Machine learning for software analysis: Models, methods, and applications
A Bennaceur, K Meinke – … for Dynamic Software Analysis: Potentials and …, 2018 – Springer
Machine Learning (ML) is the discipline that studies methods for automatically inferring models from data. Machine learning has been successfully applied in many areas of software engineering…
Gatekeeping Algorithms with Human Ethical Bias: The ethics of algorithms in archives, libraries and society
M van Otterlo – arXiv preprint arXiv:1801.01705, 2018 – arxiv.org
… labels-to-show-if-news-is-true-or-false 14 https://www.forbes.com/sites/kalevleetaru/2017/02/17/how-twitters-new-censorship-tools-are-the-pandoras-box-moving-us-towards-the-end-of-free-speech/ 15 https://www.wired.com/2016/09/google-facebook-microsoft-tackle-ethics-ai …
It Is Going to Kill Us! and Other Myths About the Future of Artificial Intelligence.
RD Atkinson – IUP Journal of Computer Sciences, 2018 – search.ebscohost.com
… 40 Thomas Dietterich and Eric J Horvitz (2015), “Rise of Concerns About AI: Reflections and Directions”, Viewpoints: Communications of the ACM, Vol. 58, No. 10, available at http://research.microsoft.com/enus/um/people/horvitz/CACM_Oct_2015-VP.pdf …
Artificial Intelligence: Risks to Privacy and Democracy
KM Manheim, L Kaplan – Forthcoming, Yale Journal of Law and …, 2018 – papers.ssrn.com
… 12 In 2014, a chatbot fooled several human judges into thinking it was human … Consider Internet Protocol (IP) messaging platforms such as Microsoft’s Skype, Google’s Allo, Tencent’s WeChat, Facebook’s WhatsApp and Messenger, Naver’s Line, Apple’s iMessage, and stand …
Troll-Detection Systems Limitations of Troll Detection Systems and AI/ML Anti-Trolling Solution
U Bhatt, D Iyyani, K Jani, S Mali – 2018 3rd International …, 2018 – ieeexplore.ieee.org
… Within 24 hours “Tay” Microsoft’s artificial intelligence project modelled after an American teenager went from conversing awkwardly to spouting racist … anti-semitic-trolls”, last retrieved on 21st March 2017 [4] “https://thinkprogress.org/microsofts-lovable-teen-chatbot-turned-racist …
The Risk Of Not Changing Risk Perspectives On The Future Of Talent And Operating Model In Banks’ Risk Functions
AM Talamillo – Boletín de Estudios Económicos, 2018 – search.proquest.com
… 22 As an example of these undesired evolutions of self-trained algorithms, Tay, Microsoft’s experimental Twitter chatbot was disconnected after going from “humans are super cool” to “Hitler was right i hate the jews” in just 24 hours …
Autonomy and Artificial Intelligence: The Future Ingredient of Area Denial
D Dash – 2018 – claws.in
… There are certain AI-based applications analogous to ‘Chat Bots’, such as Tay Bot13, and ‘Virtual Personal Assistants’, such as Siri, Cortana, Google Assistant and so forth … The perfect case was of the Tay AI Twitter bot by Microsoft Corporation, which was closed down …
M. Ross Quillian, Priming, Spreading-Activation and the Semantic Web
M Pace-Sigge – Spreading Activation, Lexical Priming and the …, 2018 – Springer
… 3. In spring 2016, Microsoft presented a “chatbot” called TAY which “learned” off Twitter feeds. Apparently, less than 24 hours after its launch, the “feeders” had turned TAY into a bot spouting racist rubbish. On a more serious …
People are averse to machines making moral decisions
YE Bigman, K Gray – Cognition, 2018 – Elsevier
Manufacturing Consensus: Computational Propaganda and the 2016 US Presidential Election
SC Woolley – 2018 – digital.lib.washington.edu
… University of Indiana. Some work at major publications in cities such as New York, San Francisco, and Los Angeles. There are also small research teams at Microsoft Research, UC Berkeley, and NYU that track bots. I built ties …
HELP WANTED
M Bogen, A Rieke – 2018 – apo.org.au
… Some tools engage candidates with chatbots and virtual interviews, and others use game-based assessments to reduce reliance on traditional (and often structurally biased) factors like university attendance, GPA, and test scores …
Human-Machine Teaming and Cyberspace
FJ Maymí, R Thomson – International Conference on Augmented Cognition, 2018 – Springer
… Adelphi Maryland (2015)Google Scholar. Metz, R.: Why Microsoft’s teen chatbot, Tay, said lots of awful things online, 24 March 2016. https://www.technologyreview.com/s/601111/why-microsoft-accidentally-unleashed-a-neo-nazi-sexbot/. Accessed 23 Feb 2018. Newhouse, B …
AI governance: A research agenda
A Dafoe – Governance of AI Program, Future of Humanity Institute …, 2018 – fhi.ox.ac.uk
AI Governance: A Research Agenda. Allan Dafoe, Governance of AI Program, Future of Humanity Institute, University of Oxford. First draft July 2017; v1.0 August 27 2018. Abstract …
Hardening quantum machine learning against adversaries
N Wiebe, RSS Kumar – New Journal of Physics, 2018 – iopscience.iop.org
… Author affiliations. 1 Microsoft, One Microsoft Way, Redmond WA 98052, United States of America … Perhaps one of the most notable examples of this is the Tay chat bot incident. Tay was a chat bot designed to learn from users that it could freely interact with in a public chat room …
Ethics in Social Autonomous Robots: Decision-Making, Transparency, and Trust
F Alaieri – 2018 – ruor.uottawa.ca
… safety. These trends are also apparent with bots, such as chatbots, personal assistant applications, and … 5- Absence of ethics and the presence of algorithmic bias has already led to artificial intelligence failures such as Microsoft’s catastrophic chatbot Tay (Bass, 2016) …
Changing Perspectives: Is It Sufficient to Detect Social Bots?
C Grimme, D Assenmacher, L Adam – International Conference on Social …, 2018 – Springer
… http://www.theguardian.com/commentisfree/2014/may/04/pro-russia-trolls-ukraine-guardian-online. 9. Ohlheiser, A.: Trolls turned tay, microsoft’s fun millennial ai bot, into … Pfaffenzeller, M.: Bundestagswahlkampf: CDU erwägt Einsatz von Chatbots, March 2017Google Scholar. 12 …
Moviegraphs: Towards understanding human-centric situations from videos
P Vicol, M Tapaswi, L Castrejon… – Proceedings of the …, 2018 – openaccess.thecvf.com
… The increasing interest in social chat bots and personal assistants [1, 4, 18, 22, 27, 42] points to the importance of teaching artificial agents to understand the subtleties of human social interactions. Towards this goal, we collect a novel dataset called MovieGraphs (Fig …
Data-driven HR: how to use analytics and metrics to drive performance
B Marr – 2018 – books.google.com
… One good example is virtual helpdesk agents – chatbots, essentially – that could answer simple employee questions such as: ‘When is the company closed over the Christmas break?’ or ‘How much of my annual leave have I used already this year?’ AI technology is now so …
Towards a deeper understanding of current conversational frameworks through the design and development of a cognitive agent
PP Angara – 2018 – dspace.library.uvic.ca
… the capabilities of an application. For example, Pizza Hut’s chatbot on Facebook … However, these responses can be highly varied and unexpected [23] (eg, Microsoft’s twitterbot Tay whose responses turned racist [4]). For conversational applications to …
The future of work: robots, AI, and automation
DM West – 2018 – books.google.com
The Future of Work: Robots, AI, and Automation. Darrell M. West. The digital economy is here. Robots, artificial intelligence, and driverless cars are no longer the stuff of futuristic visions. They are with us today and …
A taxonomy of software bots: towards a deeper understanding of software bot characteristics
CR Lebeuf – 2018 – venus.library.uvic.ca
… The personal assistant chatbot Julia (1994) was the first verbal chatbot [19 … Among the thousands of chatbots that exist today, some popular mainstream examples include: Apple’s Siri,13 Microsoft’s Cortana,37 Amazon’s Alexa,14 Google Assistant,15 Microsoft’s Tay,16 Cleverbot …
Artificial Intelligence: How Advance Machine Learning Will Shape The Future Of Our World
C Ahmet – 2018 – books.google.com
… Over time, major tech firms such as Google and Microsoft have moved to utilize specialized chips that are tailored to running and training machine-learning models. The best example of these custom chips is Tensor Processing Unit from Google …
Privacy’s Law of Design
AE Waldman – UC Irvine Law Review, 2018 – papers.ssrn.com
… They are built and sold by corporations, collections of real persons working toward shared goals37 that can be influenced by the people38 and ideas around them.39 New ideas at Microsoft, for example, are influenced by CEO Satya Nadella’s deep personal commitment to …
Artificial Intelligence In Marketing
J Cannella – 2018 – jamescannella.com
… ARTIFICIAL INTELLIGENCE IN MARKETING (contents excerpt): Image Curation; Augmentation; Data Synergy; Chatbots; Customer Service; eCommerce; Personal Assistants; Chatbot Management; Personalized UI and UX; Voice; Personal Assistants …
Analysing Seq-to-seq Models in Goal-oriented Dialogue: Generalising to Disfluencies.
S Bouwmeester – 2018 – esc.fnwi.uva.nl
… A marked example of this is Microsoft’s AI-based chat-bot named Tay, which was terminated after it became racist due to bad input from users. Common practices and defining characteristics of data are discussed below …
Creation and development of an AI teaching assistant
E Benedito Saura – 2018 – upcommons.upc.edu
… A curious episode with a Chatbot occurred on the 23rd of March of 2016, when Microsoft released Tay via … However, as it learned from any user’s inputs from the social media, Tay quickly developed vicious paranoia and had to be turned off in just 16 hours, since it started …
The Substitution of Labor: From Technological Feasibility to Other Factors Influencing Job Automation
R Teigland, J van der Zande, K Teigland… – The Substitution of …, 2018 – papers.ssrn.com
Page 1. The Substitution of Labor From technological feasibility to other factors influencing job automation Jochem van der Zande Karoline Teigland Shahryar Siri Robin Teigland Innovative Internet: Report 5 January 2018 Electronic …
Robotization of Work as Presented in Popular Culture, Media and Social Sciences (part two)
B Czarniawska, B Joerges – 2018 – gupea.ub.gu.se
… LSE researchers took up less known Stanley Milgram’s research5 on “cyranoids”: chatbots speaking through humans … Make them human-like, and equip them with human failings, or make them provoke human failings (6 instances) Microsoft AI Tay got itself a Twitter account …
Autopoiesis and Robopoetics in Gustavo Romano’s IP Poetry Project
S Weintraub – Latin American Technopoetics, 2018 – content.taylorfrancis.com
… Autopoiesis and Robopoetics 35 AI (teen girl) chatbot “Tay,” which was created and almost immediately “deleted” by Microsoft following a series of tremendously racist and misogynistic tweets (March 2016). 22 My translation. See also the often-cited bibliography on cyborg
BIG DATA: OR, THE VISION THAT WOULD NOT FADE
G Rieder – 2018 – pure.itu.dk
… innocuous smartphone apps sharing sensitive user information (see Goodin, 2015), Twitter chatbots turning racist overnight (see Vincent, 2016), the list could be continued almost indefinitely. But none of this was apparently enough to diminish what Kallinikos (2013) has …
The rise of emotion-aware conversational agents: threats in digital emotions
M Mensio, G Rizzo, M Morisio – Companion Proceedings of the The Web …, 2018 – dl.acm.org
… This agent was put online by Microsoft on Twitter for experimenting how it could learn from online … also all the threats linked to design and implementation issues that happened for example with Tay … by passing more and more time on a device (in the case of a chatbot) or with a …
Innovating Inclusion: The Impact of Women on Private Company Boards
JS Fan – Fla. St. UL Rev., 2018 – HeinOnline
INNOVATING INCLUSION: THE IMPACT OF WOMEN ON PRIVATE COMPANY BOARDS. JENNIFER S. FAN. ABSTRACT: Eight percent – that is the percentage of women who serve on the boards of directors of private high technology companies …
Cloud Robotics Law and Regulation
E Fosch Villaronga, C Millard – Queen Mary School of Law Legal …, 2018 – papers.ssrn.com
… Joint Director of the Microsoft Cloud Computing Research Centre. Electronic copy available at: https://ssrn.com/abstract=3305353 Page 3 … We will not consider robots that have no physical embodiment, such as chatbots. Clouds, robots and cloud robotics …
Mobile advertising: an exploration of innovative formats and the adoption in the Italian market
E REINA – 2018 – politesi.polimi.it
… 81 3.6 CHATBOTS … 87 Figure 39- Example of application of chatbot in a Tommy Hilfiger campaign …
A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework
K Yeung – Available at SSRN, 2018 – papers.ssrn.com
Page 1. MSI-AUT(2018)05 1 Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT) 9 November 2018 MSI-AUT(2018)05 A study of the implications of advanced digital technologies …
Theorizing civic engagement and social media
J Uldam, A Kaun – Social Media Materialities and Protest: Critical …, 2018 – books.google.com
Page 176. 7 Theorizing civic engagement and social media The case of the “refugee crisis” and volunteer organizing in Sweden Julie Uldam and Anne Kaun Introduction Social media have been vested with hopes that they …
Archetypes of Artificial Intelligence Utilization–How companies create and capture value from a novel business technology
S Saukkomaa – 2018 – aaltodoc.aalto.fi
Page 1. ARCHETYPES OF ARTIFICIAL INTELLIGENCE UTILIZATION How companies create and capture value from a novel business technology Master’s Thesis Selim Saukkomaa Aalto University School of Business International Design Business Management Fall 2018 …
Into the Abyss™: Toward an understanding of sexual technologies as co-actors in techno-social networks
A Moyerbrailean – 2018 – diva-portal.org
… GENDER, VIRTUAL, ASSISTANTS, AND CYBERBODIES In recent years, there has been a surge of virtual assistants (VA) such as Apple’s Siri, Microsoft’s Cortana, Amazon’s Alexa, and Google’s Google Help … Tay, Jung, and Park (2014) found that a visually non-gendered robot …
Regulation of artificial intelligence in the United States
JF Weaver – Research Handbook on the Law of Artificial …, 2018 – elgaronline.com
Page 1. 155 7. Regulation of artificial intelligence in the United States John Frank Weaver I. INTRODUCTION We have long since passed the point of questioning whether artificial intelligence (AI) will be a significant factor in our day-to-day lives around the world …
Report on the Second Annual Workshop on Naval Applications of Machine Learning
K Rainey, J Harguess – 2018 – apps.dtic.mil
… ImageNet™ is a registered trademark of LLC Incorporated, MS-COCO 2014™ is a registered trademark of Microsoft Corporation Intel® is a registered trademark of Intel® Corporation Stratix® is a registered trademark of Stratix Corporation HyperFlex ™ is a registered trademark …
Emerging Library Technologies: It’s Not Just for Geeks
IA Joiner – 2018 – books.google.com
Chandos Information Professional Series. Emerging Library Technologies: It’s Not Just for Geeks. Ida Arlene Joiner …
Click here to kill everybody: Security and survival in a hyper-connected world
B Schneier – 2018 – books.google.com
… Of course, not all software development processes are created equal. Microsoft spent the decade after 2002 improving its software development process to minimize the number of security vulnerabilities in shipped software …
Artificial Intelligence: Autonomous Technology (AT), Lethal Autonomous Weapons Systems (LAWS) and Peace Time Threats
R Surber – Reaching Critical Will. February, 2018 – ict4peace.org
… artificial-intelligence-will-rule-world.html (accessed on February 7, 2018); Metz, Cade, 2017, Google is already late to China’s AI revolution, February 2, 2017, Wired.com, available at: https://www.wired.com/2017/06/ai-revolution-bigger-google-facebook-microsoft/ (accessed on …
Depth Inclusion for Classification and Semantic Segmentation
M Lotz – 2018 – diva-portal.org
Degree project in Computer Science and Engineering, second cycle, 30 credits, Stockholm, Sweden 2018. Depth Inclusion for Classification and Semantic Segmentation. Max Lotz, KTH Royal …
Spreading activation, lexical priming and the semantic web: early psycholinguistic theories, corpus linguistics and AI applications
M Pace-Sigge – 2018 – books.google.com
SPREADING ACTIVATION, LEXICAL PRIMING AND THE SEMANTIC WEB: Early Psycholinguistic Theories, Corpus Linguistics and AI Applications. Michael Pace-Sigge …
A study of EU data protection regulation and appropriate security for digital services and platforms
M Westerlund – 2018 – doria.fi
… Certain online service providers (eg Microsoft) have also launched subscription-based services that are advertisement free. These services, although primarily targeted to business users, offer the consumer an alternative to paying in data …
Diversity, Identification, and Rhetoric in Tech: On the Analysis of Satirical Conference Talks
B Knowles – 2018 – digitalcommons.wku.edu
… with the terrorists in the 2015 Paris attacks [28]. March the next year, Microsoft released a chat bot on Twitter that was intended … October 27, 2015. [19] –. “Microsoft Scrambles to Limit PR Damage over Abusive AI Bot Tay”. The Guardian US Edition …
Deep Semantic Learning for Conversational Agents
M Morisio, M Mensio – 2018 – webthesis.biblio.polito.it
… The spreading of these agents, also called bots or chatbots, has highlighted an important need: going beyond the simple (often pre-computed) answer and provide personalized answers according to users’ profiles … 12 2 State of the Art 13 2.1 Chatbots and their classification …
Robotica: Speech Rights and Artificial Intelligence
RKL Collins, DM Skover – 2018 – books.google.com
Collins & Skover. Robotica: Speech Rights & Artificial Intelligence. In every era of communications technology – whether print, radio, television, or Internet – some form of government censorship follows to regulate the medium and its messages …
Online and Automated Dispute Resolution in New Zealand: A Law Reform and Regulation Perspective
C Austin – Victoria University of Wellington Legal Research Paper …, 2018 – papers.ssrn.com
Electronic copy available at: https://ssrn.com/abstract=3154976. Victoria University of Wellington Legal Research Papers, Student and Alumni Paper Series. Editor, Professor John Prebble QC; Editor, Māmari Stephens; Assistant Editor, Gerald Alloway …
a priori reasoning, 374, 391 Aboriginal people of Australia, 341 absolute monarchy, 83 abstraction, 377
A Ghraib, R Alfonsin, A Altdorfer, G Anders, B Anderson… – praxis – cambridge.org
… 366 change, 332–338 charity, 161 Charles I of England, 259, 387, 418 chatbots, 460 Chayes … 66–67, 453 Mexican nationalism, 138 Mexico, 141 Michelman, Frank, 43 Microsoft, 460 Milgram … Tadić case, 267 Tamir, Yael, 91 Tarde, Gabriel, 332 Tay, 460 Taylor, Charles, 35–36 …
Brave new world: service robots in the frontline
J Wirtz, PG Patterson, WH Kunz… – Journal of Service …, 2018 – emeraldinsight.com
… from business practitioners (Lelieveld and Wolswinkel, 2017; Manyika et al., 2017; Microsoft, 2018) and … ATM interface) • Voice-based (eg voice-based chatbots, Siri, Alexa) • Text-based (eg chatbots) … users were uncertain whether they interacted with a human or chatbot, and 18 …
Preliminary Report #2 From Batch 5 Of The IOT Standards Development Project
V de Montréal – 2018 – ville.montreal.qc.ca
Page 1. PRELIMINARY REPORT #2 FROM BATCH 5 OF THE IOT STANDARDS DEVELOPMENT PROJECT CONTRIBUTIONS TO A CONCEPTUAL FRAMEWORK FOR MANAGING THE SOCIAL AND ETHICAL ISSUES OF URBAN IOT FEBRUARY 2018 Prepared for …
Yesterday’s tomorrow today: Turing, Searle and the contested significance of Artificial Intelligence
J Morgan – Realist Responses to Post-Human Society: Ex Machina, 2018 – researchgate.net
… years. The first report was produced by a selected Study Panel, mainly comprised of experts in robotics, programming, data analysis, systems theory and planning, and economics (drawn from Microsoft, MIT, Harvard etc). The …
The economics of Machine Learning: a microeconomic model of customer-firm interaction
C CAMBINI, MP BORTONE – 2018 – webthesis.biblio.polito.it
… For instance, the well-known Virtual Personal Assistants (VPA) Siri (by Apple) or Cortana (by Microsoft) use speech recognition and natural-language processing algorithms to convert the user’s input in data that the software can understand; then the information is processed on …
Hierarchical imitation and reinforcement learning
HM Le, N Jiang, A Agarwal, M Dudík, Y Yue… – arXiv preprint arXiv …, 2018 – arxiv.org
… 2An important real-world application is in goal-oriented dialogue systems. For instance, a chatbot assisting a user with reservation and booking for flights and hotels (Peng et al., 2017; El Asri et al., 2017) needs to navigate through multiple turns of conversation …
Yesterday’s tomorrow today: Turing, Searle and the contested significance of artificial intelligence
J Morgan – Realist Responses to Post-Human Society: Ex …, 2018 – content.taylorfrancis.com
… years. The first report was produced by a selected Study Panel, mainly comprised of experts in robotics, programming, data analysis, systems theory and planning, and eco- nomics (drawn from Microsoft, MIT, Harvard etc.). The …
5 Yesterday’s tomorrow today
J Morgan – Realist Responses to Post-Human Society: Ex Machina – researchgate.net
… years. The first report was produced by a selected Study Panel, mainly comprised of experts in robotics, programming, data analysis, systems theory and planning, and eco- nomics (drawn from Microsoft, MIT, Harvard etc.). The …
Mental time-travel, semantic flexibility, and AI ethics
M Arvan – AI & SOCIETY, 2018 – Springer
Page 1. AI & SOCIETY https://doi.org/10.1007/s00146-018-0848-2 OPEN FORUM Mental time-travel, semantic flexibility, and AI ethics. Marcus Arvan. Received: 7 March 2018 / Accepted: 9 April 2018 …
PhotoshopQuiA: A corpus of non-factoid questions and answers for why-question answering
A Dulceanu, T Le Dinh, W Chang, T Bui… – Proceedings of the …, 2018 – aclweb.org
… significant investments of large tech companies in building personal assistants (e.g., Microsoft Cortana, Apple … technology needed for smart services such as recommendation systems, chatbots, and intelligent … Bollacker, K., Evans, C., Paritosh, P., Sturge, T., and Taylor, J. (2008 …
The Heart’s Content: Media and Marketing after the Attention Economy
R Hunt – 2018 – spectrum.library.concordia.ca
Page 1. The Heart’s Content: Media and Marketing after the Attention Economy Robert Hunt A Thesis in The Department of Communication Studies Presented in Partial Fulfillment of the Requirements for the Degree of Master of Arts (Media Studies) at Concordia University …
The Roles Of Artificial Intelligence And Humans In Decision Making: Towards Augmented Humans?: A focus on knowledge-intensive firms.
M Claudé, D Combe – 2018 – diva-portal.org
… Thus, KIFs in the tech industry such as the American Google, Amazon, Facebook, Apple, and Microsoft (GAFAM’s), or the Chinese Baidu, Alibaba … For example, famous banks such as Orange Bank or the alternative banking app Revolut use chatbots, AI wrote articles for the …
What the Digital Future Holds: 20 Groundbreaking Essays on How Technology Is Reshaping the Practice of Management
MIT Sloan Management Review – 2018 – books.google.com
Page 1. What the Digital Future Holds: 20 Groundbreaking Essays on How Technology Is Reshaping the Practice of Management. MIT Sloan Management Review …
User Experience Over Time With Conversational Agents: Case Study Of Woebot On Supporting Subjective Well-Being
HM DEMİRCİ – 2018 – etd.lib.metu.edu.tr
Page 1. USER EXPERIENCE OVER TIME WITH CONVERSATIONAL AGENTS: CASE STUDY OF WOEBOT ON SUPPORTING SUBJECTIVE WELL-BEING A THESIS SUBMITTED TO THE GRADUATE SCHOOL OF NATURAL …
A Study of Methods in Computational Psychophysiology for Incorporating Implicit Affective Feedback in Intelligent Environments
DP Saha – 2018 – vtechworks.lib.vt.edu
Page 1. A Study of Methods in Computational Psychophysiology for Incorporating Implicit Affective Feedback in Intelligent Environments Deba Pratim Saha Doctoral Dissertation submitted to the Faculty of the Virginia Polytechnic …
Towards Literate Artificial Intelligence
M Sachan, CMU EDU – pdfs.semanticscholar.org
… Most of those interlocutors will be humans; one will be a chatbot, created for the … A number of chat bots have been developed which have come close to passing the Turing … Trischler et al., 2016) from Maluuba Research, and MS MARCO (Nguyen et al., 2016a) from Microsoft …
Schedule Highlights
C Learning, C Learning, M Learning – 2018 – pdfs.semanticscholar.org
… Wordplay: Reinforcement and Language Learning in Text-based Games Trischler, Lazaridou, Bisk, Tay, Kushman, Côté, Sordoni, Ricks, Zahavy, Daumé III Machine Learning for the Developing World (ML4D): Achieving sustainable impact De-Arteaga, Herlands, Coston …
Smart Wonder: Cute, Helpful, Secure Domestic Social Robots
D Dereshev – 2018 – nrl.northumbria.ac.uk
… 6 [277], and Amazon Echo – a smart speaker [9], were first introduced [37,213]. Numerous challengers in the field have appeared since: Google [99], Apple [12], Microsoft [208], Baidu [157], and Yandex [325] (among others) compete on the intelligent personal assistant (IPA) …