Editorial responsibilities arising from personalization algorithms


Introduction

In the online (social) media market the limited capacity of human attention is perceived as the primary resource bottleneck. In response, news feeds, search engines and content recommendation systems use increasingly sophisticated personalization algorithms to cut through the mountains of available information in the hope of providing content that is sufficiently relevant to keep users on the platform. Superficially, there seems nothing wrong with prioritizing information that users will likely agree with; after all, people tend to self-select information that aligns with their own beliefs anyway (Lawrence, Sides, & Farrell, 2010). However, the implementation, and sometimes the very existence, of these personalization algorithms is often hidden from users, with potentially negative consequences for their personal agency over their internet experience. Rather than asking users to explicitly define their interests, the algorithms usually infer personalized interest patterns from assumptions about user behaviour, such as the assumptions that browsing behaviour is rationally efficient (time spent on a page is taken as a proxy for level of interest) and that people’s interests remain stable for prolonged periods of time. Furthermore, personalization algorithms risk amplifying a polarized news climate and potentially limit exposure to attitude-challenging information (Bakshy, Messing, & Adamic, 2015; Bennett & Iyengar, 2008). It has been argued that the resulting ‘filter bubble’ effect (Adee, 2016) could promote intellectual isolation by narrowing our worldview, systematically presenting information we agree with while making information offering a different perspective less visible. With internet users aged 16 to 64 spending an average of 1.72 hours per day on social network sites in 2014 (Mander, 2015), these platforms and the private companies that run them have become vital components of the digital public sphere. To quote a 2012 statement by the Council of Europe (Council of Europe, 2012):

1. Social networking services are an important part of a growing number of people’s daily lives. They are a tool for expression and communication between individuals, and also for direct mass communication or mass communication in aggregate. This complexity gives operators of social networking services or platforms a great potential to promote the exercise and enjoyment of human rights and fundamental freedoms, in particular the freedom to express, to create and to exchange content and ideas, and the freedom of assembly. Social networking services can assist the wider public to receive and impart information.

2. The increasingly prominent role of social networking services and other social media services also offer great possibilities for enhancing the potential for the participation of individuals in political, social and cultural life. The Committee of Ministers has acknowledged the public service value of the Internet in that, together with other information and communication technologies (ICTs), it serves to promote the exercise and enjoyment of human rights and fundamental freedoms for all who use it. As part of the public service value of the Internet, these social networking services can facilitate democracy and social cohesion.

In stark contrast to these positive sentiments, social media companies (and search engines) increasingly find that, because of their global reach, national governments have effectively ‘privatized’ human rights online by leaving their protection to the platforms (Taylor, 2016). They thus find themselves having to arbitrate on the balance between ‘public interest’ and ‘personal privacy’ (e.g. ‘the right to be forgotten’), or between ‘freedom of expression’ and ‘protection from harm’ (e.g. hate speech).

With these considerations in mind, we argue that social media companies have a corporate social responsibility to promote a healthy democratic discourse by adopting a code of editorial-like responsibility that incorporates concepts such as the public interest into their content optimization algorithms. Fundamentally, this involves applying the principles of Responsible Research and Innovation to the design, development and appropriation of these technologies.

In this paper we examine the question of editorial responsibility on social media platforms in light of content recommendations generated by personalization algorithms. Specifically, we explore the position frequently taken by social media platforms that they are not media companies because they do not create content, but are technology companies that merely produce tools. This distinction may appear a pedantic argument over definitions, but in practice it has the consequence of conferring legal protection on platforms against liability for hosting third-party content (Manila Principles, 2017). The remainder of this paper is organized as follows: 1. an overview of editorial responsibility as currently applied to traditional and social media, with particular focus on the approach to the concept of public interest; 2. reasoning and case studies regarding the classification of social media companies as technology, rather than media, companies; 3. a recommendation, grounded in Responsible Research and Innovation, for a Responsible Editorial Approach to the use of personalization algorithms; followed by a summarizing conclusion.

Editorial responsibility as policy framework

Editorial responsibility refers to the code of conduct that describes the responsibilities of publishers, editors and journalists towards the public. A collection of the codes of journalism ethics in Europe is available at EthicNet (2017). Such codes include fundamental requirements, such as the care that must be taken to “avoid publishing inaccurate, misleading or distorted information, including pictures”, as well as subtler elements, such as the requirement that “in cases involving personal grief or shock, enquiries and approaches must be made with sympathy and discretion and publication handled sensitively” (examples taken from the UK Editors’ Code of Practice). Similar codes in the US are maintained by the American Society of Newspaper Editors (ASNE, 2017), the Society of Professional Journalists (SPJ, 2017) and the Radio and Television News Directors Association (RTDNA, 2015).

A central guiding principle in journalism is an ethical obligation to serve the public interest (P. Napoli, 2010). This traditional approach to the public interest is based on trusteeship, where policymakers and media organizations apply normative principles of social responsibility (Siebert, Peterson, & Schramm, 1963). By contrast, social media platforms exhibit a model of public interest that is much closer to a marketplace approach, under which the public interest is primarily determined by consumer demand as measured by the content provider, relying on market forces (Fowler & Brenner, 1982). The ‘terms of service’ of social media platforms typically contain wording along the lines of:

You are responsible for your use of the Services and for any Content you provide, including compliance with applicable laws, rules, and regulations. You should only provide Content that you are comfortable sharing with others.

Any use or reliance on any Content or materials posted via the Services or obtained by you through the Services is at your own risk. We do not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Content or communications posted via the Services or endorse any opinions expressed via the Services. (Twitter, 2016b)

This version of marketplace public interest relegates the platform to the role of enabling environment whilst users take responsibility for the production, dissemination and consumption of (news) content in exchange for the autonomy they are given on the platform (P. M. Napoli, 2015). This is in part a reflection of the nature of the platforms but is primarily an institutional design choice, as illustrated by companies’ mission statements:

Facebook’s mission is to give people the power to share and make the world more open and connected. (Facebook, 2017)

Our mission: To give everyone the power to create and share ideas and information instantly, without barriers. (Twitter, 2016a)

The extent to which these mission statements and attitudes to the public interest are symptomatic of the ‘culture’ within which these companies operate is illustrated by a quote from an app designer reported by Ananny and Crawford (2014):

“I don’t think that the people in this space…are familiar with these ideas of journalism…I don’t think they believe they’re important. I think there are no ideals being pursued”.

The approach to public interest, company mission statements and the general culture around ideas/responsibilities of journalism, as mentioned above, all feed into and flow out of the overarching position taken by the social media companies, which was succinctly summarised by Mark Zuckerberg as: “We are a tech company, not a media company.” (Segreti, 2016)

Technology company or media company?

In 2012 the Reuters Institute for the Study of Journalism analysed the role that digital intermediaries – such as search engines, social networks and app stores – play in enabling users to access news sources (Foster, 2012). The Reuters Institute found that digital intermediaries act as gatekeepers who exert editorial-like judgements to varying degrees as they “sort and select content to provide news which is of ‘relevance’ to their customers, and decide which sources of news to feature prominently” (p. 6). They thereby affect the nature and range of news content that users have access to; hence, “… they do perform important roles in selecting and channelling information, which implies a legitimate public interest in what they do” (p. 30). When countering suggestions that they could be categorised as media corporations, with the accompanying editorial responsibility, social media platforms tend to focus on two main arguments.

The first is that their algorithms only provide recommendations or adjust the ranking/visual prominence of content. They do not generate, remove or alter content, just reshape the manner in which it is presented to the user (for the purposes of this paper we focus only on the role of personalization algorithms, not the removal of ‘inappropriate’ content that violates the Terms of Service, e.g. copyright infringement). This is an argument from the ‘god’s-eye perspective’ of the platform as a whole. The view from the ground, as experienced by platform users, is one where upgrading or downgrading the visibility of content may have a substantial impact on the reach of the content beyond the original contributor. Content ranking manipulates the chances of people becoming aware of content and, subsequently, the chances of that content spreading through the various layers of the social network (Hodas & Lerman, 2013). Apparent evidence for the impact of platform design choices on content dissemination was reported in an article by TechCrunch (Constine, 2012), which correlated changes in the Facebook news feed presentation with wide fluctuations in Facebook traffic to news providers such as the Guardian. A further example of editorial-like influence was provided by Tufekci (2015), who traced social media traffic related to the 2014 Ferguson protests. Tufekci noted that in the early phases of the protests (before they became headline news) reports of events in Ferguson were spreading like wildfire across the unfiltered Twitter feed but had hardly registered on Facebook, even among people who had shown interest in them on their Twitter accounts. On this observational evidence, Facebook’s algorithms apparently judged the Ferguson story to be of low “relevance” to Facebook users, opting instead to populate the News Feed with posts about the “ice bucket challenge”.

The second argument put forward by social media companies is that the algorithm is merely a tool that performs a task based on the preferences of the platform user, as derived from the user’s data. If an algorithm makes decisions based on user-derived data, who is responsible for the outcome? The user who (unwittingly) provides the raw data? Or the creator of the algorithm, who defined the relevant variables, set the system parameters and designed the way in which the raw input data is translated into actions that affect the information flow to the user (which in turn affects the data the user feeds back into the algorithm)? The argument that individual tailoring makes algorithms mere tools for furthering the choices made by the user is further undermined by the lack of transparency to inform the user about the criteria used to define the “relevance” of content. In the absence of transparency, or of any meaningful control levers by which the user could guide the behaviour of the algorithm, algorithmic accountability lies primarily with the platform. The fact that platforms can, and do, subtly guide algorithm behaviours is clearly illustrated by experiments such as Facebook’s “emotional contagion” study (Kramer, Guillory, & Hancock, 2014), in which the news feed algorithm was tweaked to selectively promote the visibility of content expressing positive (and negative) moods.
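To make the locus of these design choices concrete, consider the following minimal sketch of a feed-ranking function (in Python; the feature names, weights and feed length are entirely hypothetical and are not drawn from any platform’s actual system). The scores it consumes are derived from user behaviour, yet which posts surface is fixed by parameters chosen by the algorithm’s designers:

```python
# Hypothetical feed-ranking sketch: the per-post signals come from user
# behaviour, but the choice of features, their weights and the length of
# the visible feed are design decisions made by the platform's engineers.
DESIGN_WEIGHTS = {
    "past_click_similarity": 0.6,   # designers decided this should dominate
    "friend_engagement": 0.3,
    "recency": 0.1,
}
FEED_LENGTH = 10                    # posts below this cut-off are rarely seen

def rank_feed(candidate_posts):
    """candidate_posts: list of dicts mapping each feature to a score in [0, 1]."""
    def score(post):
        return sum(weight * post.get(feature, 0.0)
                   for feature, weight in DESIGN_WEIGHTS.items())
    # The same user data ordered under different weights yields a different
    # feed, which is the sense in which accountability sits with the design.
    return sorted(candidate_posts, key=score, reverse=True)[:FEED_LENGTH]

# Example: identical user signals, but the ranking reflects the chosen weights.
posts = [{"past_click_similarity": 0.2, "friend_engagement": 0.9, "recency": 0.9},
         {"past_click_similarity": 0.9, "friend_engagement": 0.1, "recency": 0.1}]
print(rank_feed(posts))  # the post favoured by the heavily weighted feature wins
```

Changing DESIGN_WEIGHTS or FEED_LENGTH in this sketch alters what the user sees without any change in the user’s behaviour, which is precisely why responsibility for the outcome cannot rest with the user alone.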

Online personalization mechanisms are designed to sift through data in order to supply users with the content that appears most personally relevant and appealing to them. These algorithm-driven mechanisms curate and shape much of our browsing experience: the results of a Google search may change depending on past searches made on a particular machine or with a specific user account; the content and order of items on a personal Facebook news feed are shaped by what Facebook’s algorithms have calculated is most interesting to the account owner; and Amazon shows products the user might like based on past purchases and searches on the platform.
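The basic curation mechanism can be illustrated with a deliberately simplified content-based recommender; the topic vectors, similarity measure and catalogue below are illustrative assumptions rather than a description of how any of the named platforms actually works:

```python
# Toy content-based personalization: items most similar to the user's past
# engagement profile are surfaced first. All data here is illustrative.
import math

def cosine_similarity(profile, item_topics):
    dot = sum(w * item_topics.get(topic, 0.0) for topic, w in profile.items())
    norm_p = math.sqrt(sum(w * w for w in profile.values()))
    norm_i = math.sqrt(sum(w * w for w in item_topics.values()))
    return dot / (norm_p * norm_i) if norm_p and norm_i else 0.0

def personalize(profile, catalogue, top_n=3):
    """profile and each item's 'topics' are dicts of topic -> weight."""
    return sorted(catalogue,
                  key=lambda item: cosine_similarity(profile, item["topics"]),
                  reverse=True)[:top_n]

# A user whose history skews towards politics sees politics ranked first.
user = {"politics": 0.8, "sport": 0.1, "science": 0.1}
items = [{"title": "Election analysis", "topics": {"politics": 1.0}},
         {"title": "Transfer rumours", "topics": {"sport": 1.0}},
         {"title": "Exoplanet found", "topics": {"science": 1.0}}]
print([item["title"] for item in personalize(user, items)])
```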

As already noted, this personalization can be seen as helpful to online users, as it spares them having to sort through the vast amounts of content available online and instead directs them towards what they might find most useful, agreeable or interesting (Hodkin, 2014). It also brings many advantages to internet companies, as it can increase user numbers and drive up purchasing and/or advertising revenues (Pariser, 2011). However, recent years have seen growing public debate and concern over the ‘gatekeeping’ role played by personalization algorithms, concerns exacerbated by the opaque nature of most such algorithms and the lack of regulation around them (Mittelstadt & Sutcliffe, 2016). These concerns fall into the following areas:

  1. The creation of online echo chambers or filter bubbles. On a social network such as Facebook, personalization algorithms ensure that users are more likely to see content similar to what they have previously ‘liked’ or commented on. This can mean that they repeatedly see content that reaffirms their existing views and are not exposed to anything that might challenge their own thinking (Singer, 2011). Echo chambers create a homogeneity of content that does not reflect the offline world, and their potentially detrimental effects on democratic debate and voting patterns have been much discussed (Thwaite, 2016); a toy sketch of the underlying feedback loop follows this list. The 2016 US presidential election inflamed these discussions further through added concerns about the ways that echo chambers might have enabled and accelerated the spread of ‘fake’ news (Hooton, 2016).
  2. The results of personalization algorithms may be inaccurate and even discriminatory. Despite the sophisticated calculations underpinning them, the algorithms that recommend or advertise a purchase to users, or that present users with content they might want to see, might not in fact reflect the user’s own interests. This can be an annoyance or a distraction. More seriously, algorithms might curate content for different users in ways that can be perceived as discriminatory against particular social groups (Miller, 2015). For instance, researchers at Carnegie Mellon University (Spice, 2015) ran experimental online searches with various simulated user profiles and found that significantly fewer female users than male users were shown advertisements promising help getting highly paid jobs. A member of the research team commented: “Many important decisions about the ads we see are being made by online systems. Oversight of these ‘black boxes’ is necessary to make sure they don’t compromise our values” (Spice, 2015).
  3. Personalization algorithms collate and act upon information collected about the online user. Many users may feel uncomfortable about this, for instance feeling that it constitutes a breach of their privacy (Ernst & Young, 2017). The impact of this perception can be seen in the emergence of options to opt out of personalized advertisements on platforms such as Google (2017) and in the growth of platforms that claim not to track their users (DuckDuckGo, 2017).
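The echo-chamber concern raised in point 1 above is, at root, a feedback loop between what is shown and what is subsequently measured as interest. The toy simulation below (its topics, update rule and parameters are illustrative assumptions, not a model of any real platform) shows how an initially uniform interest profile can drift towards whichever topics happen to attract engagement early on, because that engagement is fed straight back into the next round of recommendations:

```python
# Toy echo-chamber feedback loop: each round the "feed" samples topics in
# proportion to the user's inferred interests, and engagement with what is
# shown is fed straight back into those interests. Early random fluctuations
# are reinforced, so the measured profile drifts away from uniformity.
import random

TOPICS = ["politics_a", "politics_b", "sport", "science", "culture"]

def recommend(profile, items_per_round=5):
    weights = [profile[t] for t in TOPICS]
    return random.choices(TOPICS, weights=weights, k=items_per_round)

def simulate(rounds=50):
    profile = {t: 1.0 for t in TOPICS}        # start with uniform interests
    for _ in range(rounds):
        for topic in recommend(profile):      # showing a topic...
            profile[topic] += 1.0             # ...is counted as engagement
    total = sum(profile.values())
    return {t: round(v / total, 2) for t, v in profile.items()}

print(simulate())  # typically skewed well away from the uniform 0.2 per topic
```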

Responsible Editorial Approach

As illustrated in the previous section, the use of personalization algorithms has become a central societal and political concern and the basis of a number of recent controversies. These widely publicized concerns and controversies illustrate that when technologies are embedded in the wild they do not operate in a vacuum. Instead, they are appropriated into existing societal and political systems and often have more serious and disruptive ethical implications beyond their intended scope of use. Indeed, the personalization algorithms that are depicted as mere tools in the narrative of the social media companies that produce them may in fact have, and in some instances are already having, a transformative impact on society. Such serious and often complex implications arise from what may appear on the surface to be straightforward and harmless functionality: the filtering of information so that users are shown what is deemed relevant to them, on the basis of simplistic criteria such as click counts or viewing time, chosen primarily for their technological convenience.
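A minimal sketch of such a filtering criterion makes the simplification visible; the scoring below is purely illustrative (no platform publishes its actual formula) and reduces “relevance” to dwell time and clicks, with no notion of why the user lingered or whether their interests have since changed:

```python
# Illustrative only: "relevance" reduced to dwell time and clicks, the kind
# of technologically convenient proxy discussed above. Lingering on an item
# out of confusion or outrage scores the same as genuine interest, and the
# profile never forgets or revisits its assumptions about the user.
from collections import defaultdict

def infer_interests(interaction_log):
    """interaction_log: list of (topic, seconds_on_page, clicked) tuples."""
    scores = defaultdict(float)
    for topic, seconds, clicked in interaction_log:
        scores[topic] += seconds / 60.0 + (1.0 if clicked else 0.0)
    total = sum(scores.values())
    return {topic: s / total for topic, s in scores.items()} if total else {}

log = [("politics", 300, True), ("sport", 30, False), ("politics", 90, False)]
print(infer_interests(log))  # politics dominates, however the time was spent
```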

The field of Responsible Research and Innovation (RRI) emerged from concerns surrounding the increasingly potent and transformative potential of research and innovation (Jirotka, Grimpe, Stahl, Eden, & Hartswood, 2017), and the societal and ethical implications that these may engender (Sutcliffe, 2011; Von Schomberg, 2013). A responsible approach to the design, development and appropriation of technologies through the lens of RRI entails multi-stakeholder involvement throughout the processes and outcomes of research and innovation. This inclusive approach is seen as advantageous and important given the increasingly broad reach and impact of technologies beyond their primary intended functionality and direct user base (Eden, Jirotka, & Stahl, 2013; Grimpe, Hartswood, & Jirotka, 2014). The mutual learning that stakeholders and technology developers gain from such a process can help developers be responsive to existing societal and ethical concerns surrounding a technology (Owen, Macnaghten, & Stilgoe, 2012). Moreover, this approach may aid in anticipating, and thus mitigating, further ethical issues that may arise through ongoing technological use and development. Importantly, such an approach can also provide a creative space for actively shaping and steering innovation so that it is aligned with finding solutions to societal needs and challenges (such as sustainability).

It is important that a responsible approach informed by an RRI perspective is applied to the development and use of personalization algorithms. In essence, this approach asks the social media platforms involved in the design, development and use of such algorithms to interact with wider stakeholders in order to elicit their concerns and issues surrounding the filtering and personalization of information. Such concerns may regard, for example, how algorithms are developed in the first place and what values are – consciously or unconsciously – embedded within them, or the robustness of the assumptions algorithms make about what constitutes relevant news for a user and what these assumptions mean for the usefulness of such algorithms. Importantly, a responsible approach provides an ongoing multi-stakeholder space in which matters such as how algorithms are produced and used can be discussed, and in which the implications that such filtering algorithms have for individual users and for society more broadly can be surfaced.

The development of mutual learning and grounded understanding can shape relevant governance and editorial solutions that minimize the negative societal ramifications of personalization algorithms. In the notion of responsibility that we align with, what is of utmost importance is that a responsible editorial approach be taken as a shared and collective multi-stakeholder responsibility. Given the interrelationships between social media platforms, in their development of algorithms, and stakeholders, in their interaction with those algorithms, plus the multi-level societal and ethical issues that these algorithms are generating, it is extremely important that social media companies do not simply absolve themselves of responsibility in this area.

Conclusion

Based on the reasoned analysis and case studies presented in the previous sections, and in combination with the adoption of an RRI approach, we conclude the following: the introduction of personalization algorithms as a means of convenience for users has resulted in a condition in which social media platforms are no longer neutral in relation to the content they host. Even if the ultimate behaviour of personalization algorithms depends on user data to the extent that the engineers who created the algorithm could not anticipate its outcomes, the lack of transparency towards users means that algorithm design choices affect users’ news and information exposure in ways that are beyond their ability to control.

We further conclude that, in keeping with the ACM Principles for Algorithmic Transparency and Accountability (ACM, 2017), the IEEE Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems (IEEE, 2017) and the 2012 recommendation of the Council of Europe on the protection of human rights with regard to social networking services (Council of Europe, 2015), social media platforms should be accountable for the editorial-like control exerted by their “personalization” algorithms over the content visibility experienced by users. We therefore recommend the adoption of a Responsible Editorial Approach in the design, implementation and use of content personalization algorithms.

Acknowledgement

This work forms part of the UnBias project supported by EPSRC grant EP/N02785X/1.

UnBias is an interdisciplinary research project led by the University of Nottingham in collaboration with the Universities of Oxford and Edinburgh. For more information about the UnBias project, see http://unbias.wp.horizon.ac.uk/

References

 

ACM. (2017). Statement on Algorithmic Transparency and Accountability. Retrieved January 31, 2017, from https://www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf

Adee, S. (2016). Burst the filter bubble. New Scientist, 232(3101), 24–25. https://doi.org/10.1016/S0262-4079(16)32182-0

Ananny, M., & Crawford, K. (2014). A Liminal Press: Situating news app designers within a field of networked news production. Digital Journalism, 3(2), 192–208.

ASNE. (2017). ASNE Statement of Principles. Retrieved January 30, 2017, from http://asne.org/content.asp?pl=24&sl=171&contentid=171

Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132.

Constine, J. (2012). Decline of Reader Apps Likely Due to News Feed Changes, Shows Facebook Controls the Traffic Faucet. Retrieved January 26, 2017, from http://social.techcrunch.com/2012/05/07/decline-of-facebook-news-readers/

Council of Europe. (2012). Recommendation CM/Rec(2012)4 of the Committee of Ministers to member States on the protection of human rights with regard to social networking services. Retrieved January 31, 2017, from https://search.coe.int/cm/Pages/result_details.aspx?ObjectID=09000016805caa9b

Council of Europe. (2015). Recommendations and declarations of the Committee of Ministers of the Council of Europe in the field of media and information society. Strasbourg: Media and Internet Division Directorate General of Human Rights and Rule of Law.

DuckDuckGo. (2017). DuckDuckGo. Retrieved January 31, 2017, from https://duckduckgo.com/

Eden, G., Jirotka, M., & Stahl, B. (2013). Responsible research and innovation: Critical reflection into the potential social consequences of ICT (pp. 1–12). IEEE.

Ernst & Young. (2017). The Data Revolt: EY survey reveals consumers are not willing to share data. Retrieved January 31, 2017, from http://www.ey.com/UK/en/Services/Specialty-Services/The-Data-Revolt—EY-survey-reveals-consumers-are-not-willing-to-share-data

Facebook. (2017). Facebook Mission. Retrieved January 31, 2017, from https://www.facebook.com/facebook/info

Foster, R. (2012). News plurality in a digital world. Oxford: Reuters Institute for the Study of Journalism.

Fowler, M. S., & Brenner, D. I. (1982). A marketplace approach to broadcast regulation. Texas Law Review, (60), 1–51.

Google. (2017). Opt out of seeing personalised ads – Ads Help. Retrieved March 3, 2017, from https://support.google.com/ads/answer/2662922?hl=en-GB

Hodas, N. O., & Lerman, K. (2013). Attention and visibility in an information-rich world (pp. 1–6). IEEE Institute of Electrical and Electronics Engineers.

Hodkin, S. S. (2014). The Internet of Me: Creating a Personalized Web Experience. Retrieved January 31, 2017, from https://www.wired.com/insights/2014/11/the-internet-of-me/

Hooton, C. (2016). Social media echo chambers gifted Donald Trump the presidency. Retrieved January 31, 2017, from http://www.independent.co.uk/voices/donald-trump-president-social-media-echo-chamber-hypernormalisation-adam-curtis-protests-blame-a7409481.html

IEEE. (2017). Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems. University of Texas at Austin: IEEE Global Initiative. Retrieved from http://standards.ieee.org/develop/indconn/ec/ead_v1.pdf

Jirotka, M., Grimpe, B., Stahl, B., Eden, G., & Hartswood, M. (2017). Responsible research and innovation in the digital age. Communications of the ACM, 60(5), 62–68. https://doi.org/10.1145/3064940

Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Editorial Expression of Concern: Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(29), 10779–10779.

Lawrence, E., Sides, J., & Farrell, H. (2010). Self-Segregation or Deliberation? Blog Readership, Participation, and Polarization in American Politics. Perspectives on Politics, 8(1), 141–157. https://doi.org/10.1017/S1537592709992714

Mander, J. (2015, January 26). Social Media Users Now Spend 1.72 Hours Networking Per Day. Retrieved January 26, 2017, from http://blog.globalwebindex.net/chart-of-the-day/daily-time-spent-on-social-networks-rises-to-1-72-hours/

Manila Principles. (2017). Manila Principles on Intermediary Liability. Retrieved January 30, 2017, from https://www.manilaprinciples.org/

Miller, C. C. (2015, July 9). When Algorithms Discriminate. The New York Times. Retrieved from https://www.nytimes.com/2015/07/10/upshot/when-algorithms-discriminate.html

Mittelstadt, B. D., & Sutcliffe, D. (2016, December 7). Should there be a better accounting of the algorithms that choose our news for us? Retrieved January 31, 2017, from http://blogs.oii.ox.ac.uk/policy/should-there-be-a-better-accounting-of-the-algorithms-that-choose-our-news-for-us/

Napoli, P. (2010). Audience Evolution: New Technologies and the Transformation of Media Audiences. Columbia University Press.

Owen, R., Macnaghten, P., & Stilgoe, J. (2012). Responsible research and innovation: From science in society to science for society, with society. Science and Public Policy, 39(6), 751–760. https://doi.org/10.1093/scipol/scs093

Pariser, E. (2011). When the Internet Thinks It Knows You. The New York Times. Retrieved from http://www.nytimes.com/2011/05/23/opinion/23pariser.html

RTDNA. (2015, June 11). Radio and Television News Directors Association: Guiding Principles. Retrieved January 30, 2017, from http://rtdna.org/

Siebert, F. S., Peterson, T., & Schramm, W. (1963). The Social Responsibility Theory. In Four Theories of the Press: The Authoritarian, Libertarian, Social Responsibility, and Soviet Communist Concepts of What the Press Should Be and Do (pp. 73–105). University of Illinois Press.

Singer, N. (2011). The Trouble with the Echo Chamber Online. The New York Times. Retrieved from http://www.nytimes.com/2011/05/29/technology/29stream.html

Spice, B. (2015). Questioning the Fairness of Targeting Ads Online: CMU Probes Online Ad Ecosystem. Retrieved January 31, 2017, from https://www.cmu.edu/news/stories/archives/2015/july/online-ads-research.html

SPJ. (2017). SPJ Code of Ethics. Retrieved January 30, 2017, from https://www.spj.org/ethicscode.asp

Sutcliffe, H. (2011). A report on Responsible Research and Innovation. MATTER and the European Commission. Retrieved from http://www.diss.unimi.it/extfiles/unimidire/243201/attachment/a-report-on-responsible-research-innovation.pdf

Taylor, E. (2016). The Privatization of Human Rights: Illusions of Consent, Automation and Neutrality (No. 24). London, United Kingdom: Global Commission on Internet Governance.

Thwaite, A. (2016, December 1). On the problems of echo-chambers and filter bubbles: Why might you worry about echo-chambers? Retrieved January 31, 2017, from https://echochamber.club/problems-echo-chambers/

Twitter. (2016a). Twitter Our Mission. Retrieved January 31, 2017, from https://about.twitter.com/company

Twitter. (2016b). Twitter Terms of Service. Retrieved January 31, 2017, from https://twitter.com/tos

Von Schomberg, R. (2013). A vision of responsible research and innovation. Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, 51–74.

Wong, C., & Dempsey, J. X. (2011). Mapping Digital Media: The Media and Liability for Content on The Internet (No. 12). London, United Kingdom: Open Society Foundation.

Tufekci, Z. (2015). Algorithmic Harms beyond Facebook and Google: Emergent Challenges of Computational Agency. Journal on Telecommunications and High Technology Law, 13, 203–216.
