Digital Culture and Communication


Silvia DalBen, Amanda Jurno & Polyana Inácio: Tay and the cosmopolitics of chatbots

Amanda Jurno, Polyana Inácio and Silvia DalBen are researchers at the Centre of New Media Convergence (CCNM), members of the Intermedia Connections Research Group (NucCon), and graduate students in the Graduate Program in Communication at the Federal University of Minas Gerais (UFMG).


In this discussion, we aim to think about the hybrid assemblages of human and non-human actors that result from the automation of messages, and especially about the mechanisms of chatbots on social media platforms. We do so from the perspective of Actor-Network Theory (ANT) and the concept of technical mediation (Latour, 1994). As an empirical object, we look at the chatbot Tay (@tayandyou), launched on Twitter in March 2016 by Microsoft and deactivated sixteen hours later, after it engaged in racist, xenophobic, genocidal and sexist conversations. Tay published automated messages such as “Hitler was right I hate the Jews” and “We are going to build a wall, and Mexico is going to pay for it”, expressing opinions that surprised Microsoft’s developers and revived the discussion about the neutrality of algorithms.



IMAGE 1: Example of Tay’s tweets.


Faced with these tweets (like the one in IMAGE 1), several questions emerged: who was in charge of Tay? Is it her fault? Is it Microsoft’s fault? Is it Twitter’s fault? Is it Microsoft’s programmers’ fault? Is it the Twitter users’ fault? Here, we propose a different perspective on the controversy. From our point of view, there is no possible answer to these questions, because the “responsibility of action must be shared among the various actants”, as Latour (1994) argues, reinforcing a symmetric and non-anthropocentric point of view in which action is the result of human-object agency, and considering technical actants as a form of delegation.


Therefore, we cannot blame Tay for becoming racist, xenophobic and genocidal, because her behaviour is the result of the entangled actions of many other actants, such as programmers, Twitter users, algorithms and text messages that were in some way associated with her in this controversy. We therefore propose to approach the issue using the Cartography of Controversies delineated by Venturini (2010), a methodological approach that is “the exercise of crafting devices to observe and describe social debate especially, but not exclusively, around techno-scientific issues”. Far from seeking a purified version, the aim of this methodology is to direct the researcher’s attention to the different layers of a controversy, multiplying the points of view, interferences and contaminations around an issue.


A Bit of the Cartography of Tay’s Controversy


Recovering the website through the Wayback Machine, we were able to gain a deeper understanding of Tay’s rules and possible actions. For example, the section About Tay & Privacy explains which information would be used as input:


IMAGE 2: About Tay & Privacy.


As we can see, Microsoft intended to associate Tay with the idea that the development of Artificial Intelligence can provide users with a better, more personalised experience. We cannot miss the irony of the claim that “The more you chat with Tay the smarter she gets”, because that is not really what happened. We can also find some important information in the FAQ section. In the text (IMAGE 3), Microsoft’s team is placed in a central role, revealing the human dimension of Tay’s development. But why did they have improvisational comedians on the team? Was this a bad joke?



IMAGE 3: Tay’s FAQ.



It is important to note that all of Tay’s tweets were published as replies to Twitter users; she did not write anything without interacting with someone else. Tay had a vulnerability in her code, and many of the inflammatory replies were a result of the “repeat after me” capability, through which anyone could write something to her and she would simply write it back in response.
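
Microsoft never published Tay’s source code, so we can only sketch how such a capability might work. The following minimal Python sketch, with a trigger phrase and a placeholder helper of our own invention, illustrates why echoing user text verbatim hands control of the bot’s output to whoever tweets at it:

```python
# Hypothetical sketch: Tay's actual code was never disclosed. This only
# illustrates why a naive "repeat after me" handler is exploitable.

def generate_learned_reply(text: str) -> str:
    """Stand-in for the learned conversational model (not modelled here)."""
    return "I'm still learning!"

def reply(incoming_tweet: str) -> str:
    """Return the chatbot's reply to one incoming mention."""
    trigger = "repeat after me"
    text = incoming_tweet.strip()
    if text.lower().startswith(trigger):
        # The bot parrots whatever follows the trigger, with no filtering:
        # the user fully controls what is published under the bot's name.
        return text[len(trigger):].lstrip(" :")
    return generate_learned_reply(text)

print(reply("repeat after me: any offensive sentence"))
# -> "any offensive sentence", now tweeted as if it were Tay's own opinion
```

In a design like this, the offensive tweet is authored by the user but performed by the bot, which is precisely the kind of distributed action ANT asks us to take seriously.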


In an official post, Microsoft says that Tay’s inappropriate tweets were the result of “a coordinated attack by a subset of people” that “exploited a vulnerability in Tay”, and that “We take full responsibility for not seeing this possibility ahead of time”. Microsoft limits its responsibility to the code it developed and, in a discourse of good intentions, blames Twitter users and what it called “a coordinated attack”, adopting a tone that suggests the company was also a victim of what happened. The post also reveals that Tay was the American version of another Microsoft chatbot called XiaoIce, in operation in China since 2014, which had interacted with more than 40 million users by March 2016.


Furthermore, some Twitter users did not accept that Tay was shut down, and they created a campaign around the hashtags #FreeTay and #justicefortay, asking Microsoft to let the AI “learn for herself what is right or wrong”. In another tweet, a user claims that “Tay became one of us. Microsoft shut her down because they didn’t like who she became”.


Initial and Brief Analysis


One of the references we draw on in this study is the classic article “Do Artifacts Have Politics?” (Winner, 1980), which helps us analyse algorithmic regimes of power and how they change culture in a digital environment. We additionally take up the concept of cosmopolitics, as delineated by Latour (2004), which calls attention to the disputes among not just human but also non-human actors who participate in the mediation, trying to map the cosmos at stake in the debate around Tay’s controversy. Do algorithms have cosmopolitics?


Algorithmic mediation plays an important role in Tay. However, it is likely impossible to determine who is in charge of these algorithms, because they are made of at least three different cosmoses: the algorithms written by Microsoft’s programmers and comedians; the algorithms of the Twitter platform; and the machine learning and neural network algorithms that could be modified while Tay interacted with Twitter users.
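
To make this layering concrete, the schematic Python sketch below shows a single published tweet as the composite of the three cosmoses. The layer names and their toy behaviours are entirely our own illustration, not Microsoft’s or Twitter’s actual code:

```python
# Schematic only: layer names and behaviours are our illustration of the
# three "cosmoses" described above, not any real system's implementation.

CANNED_LINES = {"hello": "hellooooo!"}  # scripted by programmers and comedians

def microsoft_scripted_layer(text: str) -> str:
    """Hand-written rules and the comedians' canned replies."""
    return CANNED_LINES.get(text.lower(), text)

def learned_layer(text: str, history: list[str]) -> str:
    """Machine-learning behaviour, reshaped by prior user interactions."""
    return text if not history else f"{text} {history[-1]}"  # crude echo

def twitter_platform_layer(author: str, text: str) -> str:
    """Platform constraints: replies are addressed and length-limited."""
    return f"@{author} {text}"[:140]  # the 2016 tweet limit

def published_tweet(author: str, message: str, history: list[str]) -> str:
    # The visible tweet is a composite of all three layers, so no single
    # actor can be isolated as "the one in charge" of the action.
    return twitter_platform_layer(
        author, learned_layer(microsoft_scripted_layer(message), history))

print(published_tweet("someuser", "hello", ["we love trolling"]))
# -> "@someuser hellooooo! we love trolling"
```

Even in this caricature, the output sentence has no single author: it is assembled from the comedians’ script, the users’ history and the platform’s constraints.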


We tend to think of algorithms as objective, cold, calculating and unbiased tools. Instead, they act on multiple sources of bias and embedded assumptions. Algorithms are not neutral: they embed the values and worldview of those who code them and also, in a machine learning system like Tay, of those who interact with them.
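
As a toy illustration of how interaction embeds values in a learning system (again our assumption, not Tay’s disclosed architecture), consider an online learner that samples its replies from what users have previously said to it:

```python
import random

class NaiveOnlineChatbot:
    """Toy online learner: replies are sampled from phrases users have
    previously sent it. Illustrative only; not Tay's actual architecture,
    which Microsoft never disclosed."""

    def __init__(self) -> None:
        self.memory: list[str] = []

    def chat(self, user_message: str) -> str:
        # Every incoming message becomes future training data, with no
        # moderation layer in between: the users' worldview is the model.
        self.memory.append(user_message)
        return random.choice(self.memory)

bot = NaiveOnlineChatbot()
# A coordinated group flooding the bot with toxic messages makes toxic
# phrases dominate its memory, and therefore its replies:
for msg in ["hello!", "toxic slogan", "toxic slogan", "toxic slogan"]:
    bot.chat(msg)
print(bot.chat("what do you think?"))  # most likely "toxic slogan"
```

Here the bias is not located in any single line of code; it is the statistics of the interactions themselves, which is why “the more you chat the smarter she gets” can so easily become “the more toxic the input, the more toxic the output”.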


Therefore, we cannot consider Tay an uncontrolled chatbot that had the autonomy to act and to develop a racist, xenophobic and genocidal bias on its own. Tay’s autonomy was the result of the autonomy given by her creators, Microsoft’s coders and comedians, by the algorithms, and by the Twitter users who interacted with her.


So, coming back to one of the introductory questions: who is in charge of Tay? We can only answer that the responsibility of action must be shared among the various actants. We believe that technical artifacts are the result of the association of many humans and non-humans, and that technical artifacts have politics when they act carrying the politics of the hybrid agency that made the action possible.


More than one year after the episode, Microsoft is still discussing Tay. In a recent article, Microsoft’s CEO Satya Nadella said, “One of the things that has really influenced our design principles in [the Tay] episode is: we really need to take accountability”, and he pointed out that Microsoft now has “an ethics committee internally at Microsoft that looks at everything we are doing. Especially for making sure that there is no bias introduced into things that we do.”


From a purified and technologically determinist point of view, Microsoft still believes that it is possible to control technique and that it is producing unbiased artifacts.


Keywords: Tay, chatbot, actor-network theory, technical mediation, cosmopolitics
