Charles Ess (University of Oslo) has for many years brought a philosophical perspective to media and communication studies and is a prolific commentator on questions of research ethics and medium theory. At our upcoming workshop in Salzburg in November, Charles Ess will give a keynote on “cultural diversity, globalized media and ethical norms”. As part of our interview series with scholars in the field of digital culture and communication, we asked him about his current work in social robotics, digital media ethics and standards in digital culture and communication.
What field of research are you currently most interested in?
Broadly, much of my recent work has focused on bringing especially virtue ethics into further conversation with both normative domains within media and communication studies, beginning with (Internet) research ethics, and efforts within media and communication studies to develop normative approaches, e.g., towards the ethics of digital journalism. This means that I hop back and forth between more philosophical work in these domains (beginning with the work of Shannon Vallor and John Sullins) and work more rooted in media and communication studies (Nick Couldry’s work is a primary focal point here).
Over the past three years or so, much of my attention has been going to social robotics. I’m hoping that other colleagues in media and communication will become interested in social robots, e.g., how social robots can be understood and better designed as devices that communicate with human beings in important ways, including in (artificially) emotive and embodied modes. This is not a new suggestion, as it turns out: a colleague has pointed out that as early as 1985 Robert Cathcart and Gary Gumpert were calling for robots to be studied from these perspectives. Especially as social robots have advanced dramatically in the past decade or so, it is clear that they will become increasingly part of our communicative and media lives – as they have already begun to do in eldercare, some domestic services, and so on.
On the more philosophical side, I have long been interested in how far our various human capacities and abilities can be replicated by computers – from intelligence through emotions to embodied and tacit forms of communication. Where we seem unable to replicate human capacities in computers and machines, this tells us some very important things about what it means to be human. In a recent paper, where I catalog some of the most important “crunch points”, what was interesting to discover was that while various forms of AI will indeed be “smarter” than human beings on at least a narrow definition of intelligence, AI and robots will likely not be capable of important forms of human ethical judgment. Nor will they be capable of experiencing emotions – though they are getting better all the time at faking emotions. We can likely build sexbots that could be useful and satisfying at a certain level (and in important ways for specific populations): but such sexbots, lacking real emotion and thereby real desire, will not satisfy a human desire to be fully desired. The concerning part is that, especially as robots become increasingly “autonomous” in at least some sense of the word, we are in fact giving over moral agency and choice to these devices. The most famous and scary examples are the “warrior bots” – (semi-)autonomous devices that have the capacity to use lethal force against human beings. There are deeply important ethical, social, and political issues to be debated and resolved here.
Can you name some authors, concepts or theories that you regard as important for researching digital culture?
I continue to be convinced that Medium Theory (or theories) as constituted through the work of Marshall McLuhan, Harold Innis, Elizabeth Eisenstein, Walter Ong, and Neil Postman – and, more recently, Naomi Baron – offers a useful framework for understanding broad eras of communication modalities and their correlations with important philosophical notions. I’ve written extensively on how the shift from literacy-print to a “secondary orality” of electric/electronic culture appears to correlate (as would be predicted) with a fundamental shift in our sense of selfhood. At the same time, more recent work in mediatization theories – especially as brought forward by Andreas Hepp and Friedrich Krotz, Stig Hjarvard, and my colleague Knut Lundby – both challenges and complements these older viewpoints and claims. More broadly, the work of Klaus Bruhn Jensen is a primary reference and guide for matters methodological. At the intersection of methodologies and ethics, Annette Markham has been one of the foremost scholars and theoreticians, along with Elizabeth Buchanan.
All of this further entails attention to especially feminist theorizing, beginning with attention to matters of embodiment. My list goes back to the 1970s, beginning with Sara Ruddick, who is prominent for founding approaches denoted as ethics of care. There were many important voices in the 1990s, especially vis-à-vis the emerging digital cultures facilitated by an expanding Internet, e.g., Allucquère Rosanne “Sandy” Stone, Donna Haraway, N. Katherine Hayles, and many others. As a philosopher, however, I was especially interested in phenomenological accounts of embodiment that stretched back to the work of Maurice Merleau-Ponty (and others): most important for me were the works of Albert Borgmann and then a German philosopher, Barbara Becker, whose accounts of embodiment I found especially helpful in countering some versions of social constructivism. I find recent feminist work on notions of “relational autonomy” to be most helpful in articulating a kind of late modern, post-digital self that is both strongly autonomous (and so capable of sustaining and legitimating democratic norms and processes, including equality and gender equality) and relational (reflecting how far our sense of self is defined by networked relationships of both thick and thin ties). An excellent start here is the anthology by Andrea Veltman and Mark Piper, “Autonomy, Oppression, and Gender”.
I also think that there is a very great risk in using the term “digital” in “digital culture” – at least if it presumes, in a kind of 1990s dichotomy, that somehow the digital excludes and overcomes the analogue. We are embodied first of all as analogue creatures. Both Sandy Stone and Brian Massumi have helpfully reminded us of the dangers of forgetting the analogue. This point is also made by the Oxford philosopher of information, Luciano Floridi, whose work has been extensively influential in what I call Information and Computing Ethics (ICE). Along these lines, the Onlife Manifesto that resulted from two years of collective research and rigorous debate in the EU “Onlife Initiative” project helpfully characterizes how far the contemporary digital era must be understood in important new ways – beginning with moving beyond 1990s-style dualisms that pitted life offline against life online: hence the neologism “onlife”.
Lastly, for better and for worse, and contra the expectations of many (perhaps most) academics, “religion” has refused to go away in the ways predicted by the so-called secularization thesis. Instead it is in various forms making a come-back – including in its online expressions, practices, etc. In these directions, the work of Heidi Campbell, Mia Lövheim, Knut Lundby, Tim Hutchings, Chris Helland, and many others contributes to an increasingly complex and sophisticated understanding of digital religion as a growing and, in my view, increasingly important domain of Internet Studies more broadly.
What methods, software, or tools do you use in your research?
Obviously, anything I can get my hands on. Broadly, I try to maintain the Aristotelian recognition that theory and praxis are two sides of the same coin. One of the primary reasons why I have delved into media and communication studies is that it was clear to me that all too often, philosophical argument and debate occur in a vacuum of empirical data and evidence. At the same time, there is, of course, no meaningful empirical data without a theoretical framework for its acquisition and/or interpretation. I like to think that by bringing philosophical theories to bear on the more empirically-oriented work of media and communication studies, that work is helpfully enhanced and enriched. A particular version of this is my interest in collecting examples of both successful and less than successful research designs that generate and encounter characteristic and sometimes novel research ethics issues. These examples – along with dialogue and discussion at various workshops, PhD colloquia, etc. – help us continue to develop Internet research ethics “from the ground up” – i.e., as fully informed as possible by real-world experience and the ethical intuitions and insights of researchers – while also sustaining strong philosophical analysis and (sometimes) helpful resolutions.
What is the most interesting question at the moment concerning the role of digital technologies in interaction and communication?
In light of my interests and background, I think the most compelling question is how far digital technologies will be able to replicate human capacities for intelligence, communication, and most especially ethical judgment. This resonates with the question of how far we are being “roboticized” and/or algorithmically reshaped in our tastes, judgments, and behaviors through our increasing reliance on digital communication media. My colleague Satomi Sugiyama has recently published an article on the algorithmization of taste brought about as we increasingly depend on the algorithms of recommender systems. More dramatically, what has been rightly called “the creepy Facebook study” of 2012 (publication, 2014) shows the power of algorithms in social media to shape not only our perceptions but also our moods. I don’t mean to sound alarmist (well, maybe a little): but do we really want Facebook algorithms – ultimately tuned to making us better targets for advertisers – having such influence and control over our moods and perceptions?
This leads to a last, but perhaps most important question – namely, how well will freedom of expression and related democratic norms and processes survive, much less thrive, in Internet-facilitated communication spaces? While there are some heartening examples of using these spaces to foster democratic deliberation and discourse, there are all too many examples and reasons to worry about Internet-facilitated communication venues becoming increasingly hostile to democratic norms and values. On the one hand, especially post-Snowden, we are acutely aware of how far digital technologies permit governments an unprecedented level of surveillance and tracking of individuals. On the other hand, we conspire with this surveillance for the sake of convenience and, as Huxley and then Postman warned quite some time ago, for the sake of feeding our near-infinite appetite for distraction. But as we migrate to corporately controlled spaces such as Facebook and other Internet-facilitated communication venues – insofar as these spaces become our default public spheres – they are spaces manifestly open to corporate censorship.
Relating to our upcoming workshop in Salzburg (November 2015): What can the study of standards and norms contribute to an analysis of digital culture and communication?
The easy response is: everything! For an ethicist, more or less everything is ethical – meaning, to begin with, everything reflects and embodies specific ethical choices. Everything further entails ethically-laden consequences – beginning with how our interpretation of and responses to a given thing shape our own ethical behavior. And, most apparently, as given things are taken up and used, the question arises whether they are used along the lines they were designed for, and/or along the lines of their affordances. Moreover, given that ethical values and norms are deeply shaped and refracted by diverse cultural traditions, attending to norms and standards can help foreground how far these reflect a specific culture and its norms – and how far they may thereby run the risk of fostering a kind of cultural imperialism. For example, in the early days of the Internet, the American Standard Code for Information Interchange (ASCII) was, not surprisingly, limited to the Roman characters of U.S. English. This excluded, in the first instance, the distinctive vowels of Scandinavian languages (å, ø, æ) as well as the German umlauts (ä, ö, ü) – not to mention all languages written in non-Roman scripts, such as Chinese, Japanese, Arabic, Thai, and so on. More deeply, much was made of the Internet as an “intrinsically” democratic medium – it was thought to be difficult to censor, to offer anonymized communication, and to intrinsically foster greater freedom of expression, a flattening of hierarchies, and so on. But, as initially documented through the CATaC conferences (Cultural Attitudes towards Technology and Communication), starting in 1998, all of this led to oftentimes profound cultural conflicts.
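The encoding gap described above is easy to demonstrate in a few lines of Python – a minimal sketch, using only the standard library, of how ASCII (a 7-bit standard) rejects the very characters that Unicode’s UTF-8 encoding handles routinely:

```python
# ASCII covers only code points 0–127, so Scandinavian vowels and
# German umlauts fall outside its range; UTF-8, a Unicode encoding,
# represents them without loss.
text = "å ø æ ä ö ü"

try:
    text.encode("ascii")
except UnicodeEncodeError as err:
    # The characters simply have no ASCII representation.
    print("ASCII fails:", err.reason)

utf8_bytes = text.encode("utf-8")          # each vowel becomes two bytes
print(utf8_bytes.decode("utf-8") == text)  # True: round-trips losslessly
```

The same round-trip check fails for any non-Roman script under ASCII (e.g. Chinese or Arabic text), which is precisely the exclusion the early standard imposed.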
The good news is that much progress has been made – not only in constructing new standards (starting with Unicode) that make online communication in non-U.S. letters, symbols, etc. more straightforward, but also in our awareness of how far technologies, far from being somehow neutral, indeed carry normative weights and values. What this means more broadly: insofar as norms and standards are thus ethically laden, the task is first to become aware of how this is so, including how far the ethical dimensions of a given norm or standard may be “provincial,” i.e., primarily relevant or legitimate for a specific culture and tradition. This will inevitably be the case in many instances – but then the next step is to see how far such norms and standards can be rejiggered, how far workarounds might be developed, and so on, in ways that would help overcome their provincialism and make them useful (i.e., in a non-imperialistic fashion) on a global scale.