
To Search and Protect? Content Moderation and Platform Governance of Explicit Image Material


Published on Aug 24, 2023

Abstract

Child pornography, or child sexual abuse material (CSAM), is often held up as the limit case justifying government surveillance of digital platforms and devices. Under the 2008 Protect Our Children Act, technology companies are mandated to report CSAM to the National Center for Missing and Exploited Children (NCMEC), the government clearinghouse for such data. However, the means through which companies obtain knowledge of CSAM on their platforms are discretionary and variable, relying on both human and algorithmic processing power. Pro-privacy groups have raised concerns about the mission creep of image-based content moderation and digital search for its potential violations of the US Constitution’s Fourth Amendment protection of the “right to privacy.” Nonetheless, legal instruments continue to expand the reach of government power to search and seize incriminating data and to hold technology companies further liable for failing to report CSAM content, chipping away at the publishing immunity they currently have under US law. This case study describes the current scope of content moderation practices and introduces arguments for and against the expansion of digital search toward the removal of explicit or violent image content.

🎧Listen to an audio version of this case study.

Keywords: content moderation, CSAM, platform governance, privacy, search and seizure 

Mitali Thakor
Black Box Labs, Wesleyan University

Sumaiya Sabnam
Black Box Labs, Wesleyan University

Ransho Ueno
Black Box Labs, Wesleyan University

Ella Zaslow
Black Box Labs, Wesleyan University

Author Disclosure: Research for this project was supported by Black Box Labs at Wesleyan University.

Learning Objectives

  • Describe the history and process of content moderation by technology companies.  

  • Define user-generated content (UGC) and legal protections for such content.  

  • Identify motivations for and against content moderation and “platform governance.” 

  • Debate the ethical responsibilities of technology platforms.  

  • Describe the implications of US First Amendment and Fourth Amendment laws for digital protection and surveillance.  

  • Discuss the needs and considerations required to develop a framework for protecting user privacy while moderating CSAM for child protection.

Introduction: Moderating Explicit Content

The proliferation of violent and exploitative images online is a morally pressing issue, seeming to demand a robust political, legal, and technological response. How should technology and media platforms regulate, report, and remove such images? Who is responsible for developing standards for how technology platforms manage published content? How does content moderation help resolve CSAM cases and protect children online? Can government agencies access content that has been published by users in private messages or forums? What risks, if any, does the regulation of digital content pose for the privacy and protection of the average citizen?  

These questions surround the issue of content moderation of child sexual abuse material (CSAM), or, as it is colloquially referred to, “child pornography.” The reporting and removal of CSAM raises interconnected concerns about the role of government in regulating how technology platforms manage user-generated content (UGC) while also protecting users’ digital privacy.

This case study uses the issue of CSAM moderation as an entry point into debates about content moderation and the ethical responsibility of technology companies and digital media platforms. The case study also introduces arguments made by pro-privacy advocates suggesting that widespread content moderation and governmental regulation may impinge upon digital users’ privacy and publishing freedoms. While the examples provided in this case study focus on the United States, the content moderation of digital images is a global and internetworked issue, and readers are encouraged to consider non-US contexts and case studies as well.  

Content Moderation: Flags, Algorithms, and Humans

In recent years, particularly after the 2016 US election, Russia’s full-scale invasion of Ukraine in 2022, and recurring controversies over “fake news,” digital platforms have received attention for their practices of content moderation, or the ways they filter and censor online discourse. In what Christian Katzenbach terms the “responsibility turn,” both government officials and other public figures have called for technology companies to take more responsibility for the speech or content published on their platforms.1

Content moderation is the screening and filtering of any type of user-generated content (UGC) posted, uploaded, and shared online. Content moderation can be performed by user flags, algorithmic search filters, and teams of human content moderators to determine whether the UGC is appropriate for publication based on legal codes or the rules and policies of the platform on which the content is published. UGC that may be monitored can include text, audio, images (including photos, illustrations, or diagrams), videos, or livestreamed film. Content moderation may also include protecting user security by scanning posted content or attachments for malware or viruses.

Flagging is the process through which users can report images, videos, or text for content moderation review. Mechanisms for flagging vary by platform, ranging from clicking a report button to submitting a written report. In theory, flagging should be accessible to all users as a sort of “ubiquitous mechanism of governance” online.2 Some platforms, however, have also initiated “trusted flagger” programs that afford certain users or third parties higher priority in the processing of notices or increased access to infrastructures for submitting flags.3
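To make the idea of prioritized flag handling concrete, the sketch below models a hypothetical flag queue in Python in which reports from designated “trusted flaggers” are surfaced to reviewers before ordinary user reports. The reporter names, priority levels, and queue design are illustrative assumptions, not a description of any actual platform’s system.

```python
# A toy sketch (hypothetical names and priorities) of a flag intake queue in
# which reports from "trusted flaggers" are surfaced to reviewers first.
import heapq
import itertools
from dataclasses import dataclass, field

TRUSTED_PRIORITY = 0   # reviewed first
STANDARD_PRIORITY = 1

@dataclass(order=True)
class Flag:
    priority: int
    sequence: int                        # tie-breaker preserving submission order
    content_id: str = field(compare=False)
    reporter_id: str = field(compare=False)
    reason: str = field(compare=False)

class FlagQueue:
    def __init__(self, trusted_flaggers: set[str]):
        self._trusted = trusted_flaggers
        self._heap: list[Flag] = []
        self._counter = itertools.count()

    def submit(self, content_id: str, reporter_id: str, reason: str) -> None:
        priority = TRUSTED_PRIORITY if reporter_id in self._trusted else STANDARD_PRIORITY
        heapq.heappush(self._heap, Flag(priority, next(self._counter), content_id, reporter_id, reason))

    def next_for_review(self) -> Flag | None:
        return heapq.heappop(self._heap) if self._heap else None

# A report from a hypothetical trusted child-safety organization jumps the queue.
queue = FlagQueue(trusted_flaggers={"hotline_org"})
queue.submit("post_123", "ordinary_user", "explicit imagery")
queue.submit("post_456", "hotline_org", "suspected CSAM")
first = queue.next_for_review()
print(first.content_id, first.reporter_id)  # -> post_456 hotline_org
```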

Algorithmic moderation systems use machine learning classifiers trained on large volumes of text and images to detect and filter potentially violent or otherwise problematic content. Algorithmic moderation tools can be used to address copyright infringement, toxic speech, violent imagery, explicit nudity, and other categories of content, depending on the rules of the platform. As several recent studies have shown, algorithmic content moderation is not without bias: the norms and rules it encodes rely on human raters’ judgments within the training data.4 Algorithmic moderation also does not always mean outright removal of content: in some cases, information may be filtered through practices of “reduction” designed to reduce its visibility.5
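As a rough illustration of classifier-based moderation, the following Python sketch (assuming scikit-learn is installed) trains a toy text classifier on a handful of hand-labeled examples and maps the model’s confidence score to hypothetical actions: removal, visibility “reduction” with escalation to human review, or no action. The labels, thresholds, and categories are invented for illustration; production systems rely on vastly larger training sets and on human raters’ judgments, as the studies cited above note.

```python
# A minimal sketch of classifier-based text moderation. The tiny training set
# stands in for the large volumes of human-labeled data real platforms use;
# the labels and thresholds are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1 = violates policy, 0 = acceptable (toy labels supplied by human raters).
train_texts = [
    "I will hurt you if you post that again",
    "you people are worthless and should disappear",
    "congrats on the new job, so happy for you",
    "does anyone have notes from yesterday's lecture?",
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

REMOVE_THRESHOLD = 0.9   # high confidence: take the post down
REDUCE_THRESHOLD = 0.6   # moderate confidence: reduce visibility, queue for human review

def moderate(text: str) -> str:
    score = model.predict_proba([text])[0][1]  # probability of "violates policy"
    if score >= REMOVE_THRESHOLD:
        return "remove"
    if score >= REDUCE_THRESHOLD:
        return "reduce_and_escalate"
    return "allow"

# With so little training data the scores hover near 0.5, so both examples may
# fall into the "allow" band; real systems are trained on far more data.
print(moderate("I will hurt you if you post that"))
print(moderate("see you at the study group tonight"))
```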

Commercial content moderation refers to the labor done by human workers who are explicitly hired to review flagged platform content in an industrial context. There are four primary categories of content moderators: in-house moderators employed at technology companies; “boutique” moderators5 who are hired to manage clients’ online presence; employees at business process outsourcing services; and freelance or “micro-labor”7 task workers who conduct piecework content review.6  

Companies have increasingly turned to algorithmic strategies to moderate content, which either match images to a database of content that is known to violate policies or classify images within set categories of violation. The move toward automated content moderation has been motivated by a desire to absolve humans of the burden of looking at explicit content, but in practice, even algorithms require human decision-making. Content moderation issues are labor issues: from those responsible for the upkeep of algorithms, to users mobilized to report content they come across, to the people who sift through flagged images, moderating content requires an enormous amount of work, and the task expands as digital platforms grow. 

Content Moderation as “Dirty Work”?

Content moderation comes down to the work of case-by-case deliberation. The practice of moderating content requires general policies such as “no hate speech” to be translated into actionable instructions for those reviewing flagged material. In response to the massive volume of content posted on major social media platforms every minute, tech companies have largely delegated the work of content moderation to underpaid, outsourced laborers. The emotional cost of moderating content is disproportionately borne by people of color working in contingent and precarious contract labor sites in the Global South, especially India and the Philippines (and, to a lesser extent, lower-income communities in North America and Europe). Acting as the first level of content moderation, these workers respond to flagged content by rejecting or confirming the complaint or escalating it to permanent members of content moderation teams at the companies. Workers sift through thousands of images per day, making decisions about content within seconds of looking at it.

This work is difficult and treated as disposable, with contracts easily terminated and quick turnover between moderators; the average length of a content moderator’s contract is three to six months. The work of these moderators goes entirely unseen by users of these platforms and by the public; out of sight, these workers view and judge some of the most violent images on the Internet. In 2017, two moderators who worked for an agency contracted through Microsoft filed a lawsuit against the company, alleging they suffered PTSD from viewing violent images in their work.7 Recent scholarship in digital labor studies has drawn attention to the psychological and emotional costs of monitoring images such as terrorist propaganda, live-streamed executions, and sexual violence. The emotional trauma many of these outsourced workers may face from sorting through thousands of videos in rapid succession has come under scrutiny, with “rote, repetitive, and endless work tasks”8 that suggest something like a modern, digitally driven assembly line. Moderators might be seen as doing “dirty work,”9 “data janitorial,”10 or “ghost work,”11 operating in the shadows and behind the scenes, vulnerable to exploitation and abuse themselves as they work to keep the Internet clean.

At the same time, some have cautioned against this tendency to view content moderators as invisible and victimized, challenging the Westernized perspective that so-called ghost work is invisible: for whom is it invisible and for whom has it always been a reality?12 Indeed, are hierarchies of labor and caste systems of digital work just part and parcel of “tech colonialism” today?13

Nonetheless, this stratified labor economy is often held up as the justification for automating the review of images and videos. The argument provided is that no one should have to undergo the trauma of viewing violent images such as CSAM if there is an automated, algorithmic process capable of doing so. Algorithms flag questionable images. Content moderators parse these images at a rapid clip, making quick judgments on spectrums of violence. Elite content reviewers sitting in corporate offices then make the call on whether to report an image or video to investigative authorities. Image content review is subjective, trainable work, a dynamic process between human and computer moderator.14 Content moderators at every level—from outsourced contract worker to corporate reviewer—establish shared perceptive practices to process algorithmic outputs and engage decision-making to escalate a CSAM image into a legal case. Commercial content moderators have been made into labor “surrogates”15 to support the illusion of free-flowing and secure Internet spaces under tech capitalism. It is easy to see why reports of the abuse of human moderators might lead back to a push for automation. Many researchers working on image detection algorithms promote a vision for the future in which content moderation is entirely mechanized, and humans do not need to see child abuse images for content review. However, full automation is impossible in CSAM cases, because they involve police investigations and will always require some form of human moderation.16 Humans will always be needed in the processing of CSAM.

Under Section 230 of the United States Communications Decency Act, Internet providers and third-party platforms are not liable for the speech and actions of individual users.17 The section states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”18 The Electronic Frontier Foundation, a pro-privacy nonprofit, explains, “online intermediaries that host or republish speech are protected against a wide range of laws that might otherwise be used to hold them legally responsible for what others say and do.”19 Many digital rights organizations emphasize that Section 230 is fundamental for freedom of expression on the Internet because it protects “common carriers” or publishers from being liable for any speech or actions of individual users.

Not all forms of speech, however, are protected by Section 230. Under the PROTECT Our Children Act of 2008, platforms are required to report instances of CSAM to the National Center for Missing and Exploited Children (NCMEC) through the CyberTipline. While platforms face fines of up to three hundred thousand dollars for knowingly failing to report CSAM to NCMEC, the act specifies that “Nothing in this section shall be construed to require an electronic communication service provider or a remote computing service provider to (1) monitor any user, subscriber, or customer of that provider; (2) monitor the content of any communication of any person described in paragraph (1); or (3) affirmatively seek facts or circumstances described in sections (a) and (b).”20 In other words, companies are not expected to surveil their users’ communications but must report CSAM if they become aware of it. This exception to Section 230 raises questions not only about which forms of speech are protected by law but also about the extent of tech companies’ legal obligations to moderate content.21

Because of this discretionary variability, numerous policymakers and legal experts have made calls to further “fix” Section 230 for the supposed loopholes it affords technology companies. Some have suggested that content moderation of only the most egregious content (CSAM or other violent images) does not pose a threat to free speech on the Internet, but rather imposes a necessary “duty of care” upon platforms.22

Platform Governance

Media service sites such as Facebook and Instagram (owned by Meta), YouTube (owned by Google), or Twitter can be described as “platforms.” Platforms can be interpreted computationally, architecturally, figuratively, and politically:23 1) computationally, a platform is an infrastructure that supports multiple applications and allows code to be written or run;24 2) architecturally, a platform is an elevated, human-built structure that supports particular operations; 3) figuratively, a platform is the foundation or basis justifying an action; 4) politically, a platform is the principle or belief upon which a political candidate may run.  

Digital service providers have relied on the elasticity of the definition of “platform” to function simultaneously as media outlet, advertising outreach tool, and the computational backbone for small developer apps. Platforms can claim that they are not responsible for user-generated content, which is protected under the First Amendment as free speech.  

Digital platforms, however, are inherently political, not neutral substrates upon which users make communications. Platforms make constant decisions about which information to support, maintain, or amplify, in accordance with both internal policies and ideologies as well as national and international regulations. These negotiations have been termed platform governance: how the policies and practices of a corporatized digital platform are mediated by and with political actors and governments.25 Platform governance does not necessarily mean that platforms perform the work of government; however, some have criticized the rise of “informational capitalism” and the tendency for platforms to act like sovereign governments in their own right without any actual government oversight.26 At the same time, there is also a concern that governments may interfere too much in the day-to-day operations of platforms, leading to worries about censorship of free speech. There is an argument to be made, then, that a middle ground needs to be found, one of cooperative responsibility27 or mutual understanding and labor between platforms, governments, and users in maintaining secure and civil public digital space.

Privacy, Search, and Seizure

Content Moderation as Surveillance?

Most media platforms employ “trust and safety” mechanisms for ensuring the security of their users. These mechanisms may expand upon or fall short of national regulations. However, in recent years platforms have increasingly aligned with government regulations for content governance. Some legal scholars have argued that the recent “commingling of public and private authority” required by content moderation has troubling implications for data governance.28 Federal law enforcement relies on media platforms to perform content review and design content moderation software, and media platforms rely on government to extend their ability and rationale for doing content review.

There is also a concern that content moderation at the behest of governments veers too close to surveillance, requiring constant scanning of user information, media, and posted content. For example, content moderation of explicit images relies upon screening new images against existing databases of known explicit or illegal material. A CSAM image, for example, is a set of data with a unique “hash value”; duplicate copies of the image would have the exact same hash value. Most CSAM content moderation algorithms perform hash value matching, a process many have described as matching “digital fingerprints.”29 To extend this analogy to a real-life scenario, how might you feel if your fingerprints were constantly being scanned against a database as you walked through public space?
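The sketch below illustrates the basic logic of hash-value matching: compute a “digital fingerprint” of a file and check it against a database of fingerprints of previously identified material. It uses an exact cryptographic hash (SHA-256) for simplicity; many deployed systems instead use perceptual hashes so that re-encoded or slightly altered copies still match, whereas an exact hash only matches byte-identical files. The placeholder hash set and file path are hypothetical.

```python
# A minimal sketch of hash-based matching against a database of known material,
# assuming the "known_hashes" set would be supplied by a clearinghouse.
import hashlib
from pathlib import Path

known_hashes = {
    "0" * 64,  # placeholder; real entries would be hashes of previously identified images
}

def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 'digital fingerprint' of a file's bytes."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_known_material(path: Path) -> bool:
    """Return True if the file's fingerprint appears in the database of known material."""
    return sha256_of_file(path) in known_hashes

# Example (hypothetical path): a match would trigger a report for review,
# not any automatic handling of the file itself.
# print(matches_known_material(Path("uploaded_image.jpg")))
```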

The criticism of ongoing content moderation, whether algorithmic or human performed, is also related to concerns about the general privatization of law enforcement. Police rely on private technology firms to surveil and control via content regulation, and technology design choices also determine which information is available to law enforcement.30 Law enforcement’s ability to search for violent content is limited by the data available and facilitated by digital platforms, leading to a kind of “dataist statecraft”31 where platform-mediated data directs governmental practice.  

Digital Privacy and the Fourth Amendment

The US Constitution’s Fourth Amendment protects the rights of people to be secure in their persons and property from unreasonable searches and seizures by the government. As technologies advance and proliferate, however, legal experts have had to grapple with what exactly constitutes a “search and seizure” regarding digital property. What are digital users’ “reasonable” expectations of privacy on digital platforms? What qualifies as a person’s digital property?  

In the past several decades, a series of court cases have tested the limits of the government’s right to surveil users’ digital property. In Katz v. US (1967), the Supreme Court found that the government did not have the right to wiretap a public phone without obtaining a search warrant.32 The case arose when the police wiretapped a public phone booth to surveil Katz’s phone calls, suspecting that he was communicating gambling information across state lines. When Katz was found guilty, he appealed the decision, arguing that the evidence was obtained through an unreasonable search. The Supreme Court sided with Katz, establishing that the Fourth Amendment protected his private phone conversations. A crucial decision in establishing the conditions of Fourth Amendment protections against unreasonable searches in an increasingly electronic world, the case paved the way for individuals’ expectations of privacy both over the phone and online.

Following in line with Katz v. US, the Sixth Circuit Court of Appeals found in US v. Warshak (2010) that emails are protected by the Fourth Amendment, recognizing how much intimate, personal information is contained in emails today: “the Fourth Amendment must keep pace with the inexorable march of technological progress, or its guarantees will wither and perish.”33 The opinion references other protected forms of communication, such as the telephone calls in Katz v. US (1967) and letters, arguing that “given the fundamental similarities between email and traditional forms of communication, it would defy common sense to afford emails lesser Fourth Amendment protection.”

Two additional cases recognize that people have reasonable expectations of privacy in the contents and location history of their cell phones and expand the requirements for law enforcement to obtain warrants before searching digital material.34 Riley v. California (2014) determined that law enforcement needs to obtain a warrant to search the contents of an arrested individual’s cell phone. Similarly, Carpenter v. United States (2018) established that police need warrants to obtain records of people’s movements as recorded by the location tracking features of cell phones.35

The 2016 case US v. Ackerman departs from this history of protecting digital communications.36 The case arose when the online service provider AOL found suspected CSAM in Walter Ackerman’s email communications through an automated scanning program. As Section 2258A of the Protect Our Children Act stipulates, AOL reported the case to NCMEC, which informed local law enforcement, and Ackerman was charged with possession and distribution of child pornography. Ackerman pled guilty but argued in court that NCMEC was a government actor and therefore should have obtained a warrant to search his email; in other words, the evidence used against him was obtained unreasonably. The court found that, because AOL had terminated Ackerman’s account due to his violation of AOL’s terms of service, Ackerman no longer had an expectation of privacy and that the search was not unreasonable. In the activity module below, students will review the case and its subsequent appeal in 2018 and discuss why Ackerman has such profound implications for digital privacy rights.

Whereas privacy expectations are much more straightforward in the world of physical possessions—a person’s mail, house, and other material belongings are protected against government searches—the accessibility of digital information makes online privacy far more complex. But many privacy rights advocates argue that Fourth Amendment protections for digital property such as email content are well established in legal doctrine. The urgent demand for content moderation against CSAM thus runs up against established user protections for the content of their personal, or even encrypted, messages.

Encryption and Privacy

Encryption seems to offer a technical solution for protecting user privacy from intrusive government surveillance. Encrypting online information means translating readable language (“plaintext”) into an incomprehensible slew of characters (“ciphertext”) that neither humans nor machines can read without the corresponding key.37 Only those holding the algorithmic “key” can translate the ciphertext back into plaintext. Encryption is used to protect all kinds of digital communications, from credit card information to classified government messaging to health care data, but the huge variety of encryption contexts means that there is no singular, blanket way to talk about encryption.
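As a minimal sketch of the plaintext/ciphertext/key relationship described above, the Python snippet below uses the widely available cryptography package’s Fernet recipe for symmetric encryption. It is purely illustrative and does not depict how any particular platform or messaging protocol implements encryption.

```python
# A minimal sketch of symmetric encryption using the "cryptography" package's
# Fernet recipe (pip install cryptography). Purely illustrative of the
# plaintext/ciphertext/key relationship.
from cryptography.fernet import Fernet

key = Fernet.generate_key()              # the secret needed to decrypt
cipher = Fernet(key)

plaintext = b"meet me at the clinic at 4pm"
ciphertext = cipher.encrypt(plaintext)   # unreadable without the key
print(ciphertext)                        # opaque bytes, e.g. b'gAAAAAB...'

recovered = Fernet(key).decrypt(ciphertext)
assert recovered == plaintext            # only a holder of the key can recover it
```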

End-to-end encryption is a form of encrypted messaging in which information can be read only by the sender and the recipient.38 For example, the messaging platform WhatsApp, launched in 2009 and acquired by Meta in 2014, is a popular tool used globally for end-to-end encrypted chat messages. While WhatsApp scans unencrypted user profiles and accounts to assist law enforcement under search warrants, it does not produce the content of end-to-end encrypted messages.39

The inability to scan user messages has raised concerns for law enforcement, who argue that this inaccessibility hinders police access to evidence in criminal cases. Advocates for encryption argue that protected messaging provides secure communication pathways for individuals in dangerous situations, such as victims of domestic violence or people seeking abortions in states where the procedure has been outlawed.40 Pro-encryption groups reason that encryption generally empowers users to speak freely without worrying about intercepted messages.

But in many cases, a total lack of accessible user education means that users do not know which of their messages are encrypted. For example, Apple’s iMessage application displays messages that are encrypted in blue, whereas nonencrypted messages appear in green, but nowhere in the iMessage app does Apple communicate the distinction to its users (the information is available, however, if users search the Apple Support web page).41 Users looking to communicate information securely often opt for applications such as Signal, Telegram, or WhatsApp, which are more widely known to be encrypted. 

The fear from child protection organizations, of course, is that encrypted messages and platforms allow digital bad actors to share illicit content with impunity. The proposed EARN IT Act of 2022 and the STOP CSAM Act of 2023 are the latest legislative efforts to hold tech companies “accountable” for CSAM on their platforms, chipping away at the immunity they currently have under Section 230 of the Communications Decency Act.42 Critics, however, argue that the EARN IT Act would open up more channels for the surveillance of user accounts, messages, and devices without actually resulting in any increase in CSAM detection or child protection more broadly.43 Whether or not these legislative efforts succeed, it seems clear that efforts to create exceptions to user privacy or encrypted messaging hinge upon arguments about fighting CSAM and protecting children.

Protection and/vs. Privacy

We suggest that the debate over digital search and content moderation has been set up as a zero-sum game in which privacy expectations must be forsaken for children to be safe. In addition, this zero-sum framing equates “safety” with detectability—in other words, with the ability for children’s abuse images to be detected, algorithmically or otherwise. We argue that the equation of safety with detectability is fundamentally flawed because it takes violence as a given: it is predicated upon the idea that violent images will always circulate and that harm is always to be expected. Even as its supporters insist that they are developing tech solutions in the name of eventual eradication, what they are calling for is the eradication of CSAM objects, not of the abuse itself.

Furthermore, there is a worry that content moderation–based solutions rely on “tech fixes” to resolve social issues.44 These tech fixes seem to depend upon media platforms developing better platform governance and internal audit mechanisms, as well as the government being able to monitor and surveil user communications. How can users support the protection of children online without relying on technology companies themselves to handle moderation issues? How can users’ rights to privacy be safeguarded against expanded digital surveillance infrastructure? The detection and removal of CSAM has operated as a kind of limit case, pushing the boundaries of what government—and citizens—feel comfortable having monitored and searched in the name of protection.   

Activity Modules

Students are encouraged to complete the following three activity modules before turning to the general discussion questions. The modules cover three interconnected issues in moderation: the labor of viewing, technical design, and privacy ethics.

The Labor of Content Moderation

While media platforms rely on algorithmic tools for the first pass of explicit image review, content moderation is still widely performed by human viewers who make decisions on whether images constitute violations of the platform’s internal rules or violate national or international laws. In 2017, the Guardian news outlet published a series of articles on the difficult work of being a human content moderator for social media platforms, using Facebook (now owned by Meta) as an example.45

  1. Take the quiz, “Ignore or Delete: Could You Be a Facebook Moderator?” https://www.theguardian.com/news/2017/may/21/ignore-or-delete-could-you-be-a-facebook-moderator-quiz (Content note: While this quiz is intended as a pedagogical resource, some viewers may find the content disturbing).  

  2. Discuss your quiz results: What choices were difficult to make? What factors made you consider a particular image to be violent or hate speech? Were you surprised at your results in comparison with Facebook’s recommendation?  

  3. What are some of the potential drawbacks or benefits for technology and media platforms to make their content moderation policies transparent?

  4. What sort of worker protections should exist for human content moderators who review potentially explicit or hateful content?  

Design Challenges for Content Moderation

Considering the ethics of moderating illicit material is a responsibility that designers and engineers must take on. As this case study has demonstrated, technical fixes alone will not resolve moderation problems; however, design or computing interventions can make moderation features easier to use or more transparent. Consider what user-oriented technical solutions for content moderation already exist on media platforms you are familiar with:

  1. What in-platform features exist for users to flag questionable content? (buttons, upvoting, etc.)

  2. Discuss platform transparency about moderation: How (if at all) are moderation decisions described or indicated to users?

Create your own moderation plan for a hypothetical platform.

  1. What sort of content will be moderated? What are the mechanisms and challenges with moderating text, images, and videos, respectively?

  2. Discuss what other additional features might enhance content moderation. Would these features be visible to and driven by users? By platform moderators?

  3. What pitfalls exist for platform-driven moderation? Are there privacy concerns with the ways content might be flagged? Do your proposed in-platform features educate users about their privacy options?

Privacy Rights in US v. Ackerman (2016)

In 2013, the digital platform AOL was running algorithmic image detection software that flagged potential CSAM images in the email account of user Walter Ackerman. Following the legal procedure encoded in the 2008 PROTECT Our Children Act, AOL submitted a report of the email, with four scanned images, to the National Center for Missing and Exploited Children (NCMEC). A human analyst at NCMEC reviewed the images to confirm that the algorithmically detected images were indeed CSAM. NCMEC then reported the user to law enforcement, and Ackerman was subsequently arrested and indicted on charges of possession and distribution of child pornography. Ackerman argued that NCMEC, as a “government entity,” had not obtained a warrant to search his email and had therefore violated his Fourth Amendment right to protection from unreasonable search and seizure. In 2017, a district court found that Ackerman did not have a reasonable expectation of privacy in his email. Ackerman appealed this decision in 2018, and four legal and pro-privacy organizations filed an amicus brief in his support, citing implications for digital privacy far beyond this case. The brief argues that individual platforms’ terms of service should not dictate users’ constitutional rights, pointing out that the court’s logic implies that a user’s expectations of privacy under the Fourth Amendment are contingent upon their adherence to those terms. The amicus brief states that the “district court’s opinion undermines widely recognized Fourth Amendment protections for emails” and that a third-party platform’s access to communications should not erode a user’s expectation of privacy.46 Those concerned with digital privacy rights recognize the implications of Ackerman more broadly: “The ruling doesn’t just affect child pornography cases—anyone whose account was shut down for any violation of terms of service could lose Fourth Amendment protections over all the emails in their account.”47

  1. Read this overview of the case published by the Brennan Center: https://www.brennancenter.org/our-work/court-cases/us-v-ackerman

  2. What argument did the government make to justify their search of user email? Do you agree with this justification?  

  3. Should users of email account services such as AOL expect total privacy for the content of their messages?  

  4. Why do you think this case is being appealed? What implications might Ackerman have beyond searches and prosecutions for CSAM possession? 

  5. Draft a framework for a revised interpretation of Ackerman that polices CSAM content without invasions of user privacy. What are the challenges with such a framework?

General Discussion Questions

  • How might political ideologies for privacy run at odds with governmental mandates and public desires to police CSAM (child sexual abuse material)?  

  • What abusive content are tech companies obligated to moderate and what legislation sets this precedent? 

  • What are the arguments for and against digital searches of personal devices and messages for CSAM?   

  • Should algorithmic content moderation count as “search and seizure” under the US Constitution’s Fourth Amendment? 

  • Do you expect privacy on some digital platforms more than others? 

  • Come up with a working definition of “encryption,” and then list as many communication platforms as you can that use some form of user encryption. How do we know which communications platforms are encrypted?  

  • In which cases might encryption of messages be important? Are there cases in which government intrusion into encryption should be necessary?  

Bibliography

Amrute, Sareeta. “Tech Colonialism Today.” Medium, March 5, 2020. https://points.datasociety.net/tech-colonialism-today-9633a9cb00ad.

Appelman, Naomi, and Paddy Leerssen. “On ‘Trusted’ Flaggers.” Yale Journal of Law and Technology 24 (2022): 452–75. https://yjolt.org/trusted-flaggers.

Atanasoski, Neda, and Kalindi Vora. Surrogate Humanity: Race, Robots, and the Politics of Technological Futures. Durham, NC: Duke University Press, 2019. https://www.dukeupress.edu/surrogate-humanity.

Binns, Reuben, Michael Veale, Max Van Kleek, and Nigel Shadbolt. “Like Trainer, Like Bot? Inheritance of Bias in Algorithmic Content Moderation.” In Social Informatics, ed. Giovanni Luca Ciampaglia, Afra Mashhadi, and Taha Yasseri, 405–15. Cham: Springer International Publishing, 2017. https://doi.org/10.1007/978-3-319-67256-4_32.

Bloch-Wehba, Hannah. “Content Moderation as Surveillance.” Berkeley Technology Law Journal 36, no. 3 (2021): 1297–1340. https://btlj.org/wp-content/uploads/2023/01/0012-36-3-Bloch-Wehba_Web.pdf.

Bloch-Wehba, Hannah. Exposing Secret Searches: A First Amendment Right of Access to Electronic Surveillance Orders. SSRN, October 11, 2017. https://papers.ssrn.com/abstract=3050818.

Bogost, Ian, and Nick Montfort. “New Media as Material Constraint: An Introduction to Platform Studies.” In Proceedings of the First International HASTAC Conference, 176–93. Durham, NC: Lulu Press, 2007. 

Citron, Danielle, and Benjamin Wittes. “The Internet Will Not Break: Denying Bad Samaritans § 230 Immunity.” Fordham Law Review 86, no. 2 (2017): 401. https://ir.lawnet.fordham.edu/flr/vol86/iss2/3/.

Cohen, Julie E., ed. Between Truth and Power: The Legal Constructions of Informational Capitalism. New York: Oxford University Press, 2019. https://doi.org/10.1093/oso/9780190246693.002.0007.

Cope, Sophia, and Andrew Crocker. “The STOP CSAM Act: Improved but Still Problematic.” Electronic Frontier Foundation. May 10, 2023. https://www.eff.org/deeplinks/2023/05/stop-csam-act-improved-still-problematic.

Crawford, Kate, and Tarleton Gillespie. “What Is a Flag for? Social Media Reporting Tools and the Vocabulary of Complaint.” New Media & Society 18, no. 3 (2016): 410–28. https://doi.org/10.1177/1461444814543163.

“Current Status and Next Steps for the EARN IT Act of 2022.” Survive EARN IT Act, n.d. Accessed July 20, 2022. https://surviveearnit.com/.

“EFF et al Amicus Brief - U.S. v. Ackerman - 10th Circuit Court of Appeals 2018.” Electronic Frontier Foundation, April 13, 2018. https://www.eff.org/document/eff-et-al-amicus-brief-us-v-ackerman-10th-circuit-court-appeals-2018.

“Facebook’s Encryption Makes It Harder to Detect Child Abuse.” Wired, n.d. Accessed June 27, 2022. https://www.wired.com/story/facebooks-encryption-makes-it-harder-to-detect-child-abuse/.

Fourcade, Marion, and Jeffrey Gordon. “Learning Like a State: Statecraft in the Digital Age.” Journal of Law and Political Economy 1, no. 1 (2020). https://doi.org/10.5070/LP61150258.

“Fourth Amendment.” EPIC - Electronic Privacy Information Center (blog), n.d. Accessed May 23, 2023a. https://epic.org/issues/privacy-laws/fourth-amendment/.

“Fourth Amendment.” EPIC - Electronic Privacy Information Center (blog), n.d. Accessed May 21, 2023b. https://epic.org/issues/privacy-laws/fourth-amendment/.

Gillespie, Tarleton. “The Politics of ‘Platforms.’” New Media & Society 12, no. 3 (2010): 347–64. https://doi.org/10.1177/1461444809342738.

Gillespie, Tarleton, Patricia Aufderheide, Elinor Carmi, Ysabel Gerrard, Robert Gorwa, Ariadna Matamoros-Fernández, Sarah T. Roberts, Aram Sinnreich, and Sarah Myers West. “Expanding the Debate about Content Moderation: Scholarly Research Agendas for the Coming Policy Debates.” Internet Policy Review 9, no. 4 (2020). https://doi.org/10.14763/2020.4.1512.

Gorwa, Robert. “What Is Platform Governance?” Information, Communication & Society 22, no. 6 (2019): 854–71. https://doi.org/10.1080/1369118X.2019.1573914.

Gorwa, Robert, Reuben Binns, and Christian Katzenbach. “Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance.” Big Data & Society 7, no. 1 (2020): 2053951719897945. https://doi.org/10.1177/2053951719897945.

Gray, Mary L., and Siddharth Suri. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. New York: Harper Business, 2019.

Greenberg, Andy. “Hacker Lexicon: What Is End-to-End Encryption?” Wired, November 25, 2014. https://www.wired.com/2014/11/hacker-lexicon-end-to-end-encryption/.

Helberger, Natali, Jo Pierson, and Thomas Poell. “Governing Online Platforms: From Contested to Cooperative Responsibility.” The Information Society 34, no. 1 (2018): 1–14. https://doi.org/10.1080/01972243.2017.1391913.

House of Representatives, Congress. 47 U.S.C. 230 - Protection for Private Blocking and Screening of Offensive Material. Washington, DC: US Government Publishing Office, 2021. https://www.govinfo.gov/app/details/USCODE-2021-title47/USCODE-2021-title47-chap5-subchapII-partI-sec230.

Irani, Lilly. “Justice for ‘Data Janitors.’” Public Books (blog), January 15, 2015. https://www.publicbooks.org/justice-for-data-janitors/.

Katzenbach, Christian. “‘AI Will Fix This’ – The Technical, Discursive, and Political Turn to AI in Governing Communication.” Big Data & Society 8, no. 2 (2021): 20539517211046184. https://doi.org/10.1177/20539517211046182.

Klonick, Kate. “The New Governors: The People, Rules, and Processes Governing Online Speech.” Harvard Law Review 131, no. 6 (2018): 1598. https://harvardlawreview.org/print/vol-131/the-new-governors-the-people-rules-and-processes-governing-online-speech/.

Levin, Sam. “Moderators Who Had to View Child Abuse Content Sue Microsoft, Claiming PTSD.” Guardian, January 12, 2017. https://www.theguardian.com/technology/2017/jan/11/microsoft-employees-child-abuse-lawsuit-ptsd.

Lynch, Jennifer. “Protecting Email Privacy—A Battle We Need to Keep Fighting.” Electronic Frontier Foundation, April 16, 2018. https://www.eff.org/deeplinks/2018/04/protecting-email-privacy-battle-we-need-keep-fighting.

Musto, Jennifer, Mitali Thakor, and Borislav Gerasimov. “Editorial: Between Hope and Hype: Critical Evaluations of Technology’s Role in Anti-Trafficking.” Anti-Trafficking Review no. 14 (2020): 1–14. https://doi.org/10.14197/atr.201220141.

Patella-Rey, P. J. “Beyond Privacy: Bodily Integrity as an Alternative Framework for Understanding Non-Consensual Pornography.” Information, Communication & Society 21, no. 5 (2018): 786–91. https://doi.org/10.1080/1369118X.2018.1428653.

Raval, Noopur. “Interrupting Invisibility in a Global World.” Interactions 28, no. 4 (2021): 27–31. https://doi.org/10.1145/3469257.

Roberts, Sarah. “Commercial Content Moderation: Digital Laborers’ Dirty Work.” In The Intersectional Internet: Race, Sex, Class and Culture Online, ed. Safiya Umoja Noble and Brendesha M. Tynes, 147–59. Oxford: Peter Lang Publishing, 2016. https://ir.lib.uwo.ca/commpub/12.

Roberts, Sarah T. Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven, CT: Yale University Press, 2019.

Roberts, Sarah T. “Content Moderation.” In Encyclopedia of Big Data, 2017. https://escholarship.org/uc/item/7371c1hf.

Sacharoff, Laurent. “Unlocking the Fifth Amendment: Passwords and Encrypted Devices.” Fordham Law Review 87, no. 1 (2018): 203. https://ir.lawnet.fordham.edu/flr/vol87/iss1/9/.

Salgado, Richard P. “Fourth Amendment Search and the Power of the Hash.” Harvard Law Review 119, no. 38 (2005): 38–40.

“Section 230.” Electronic Frontier Foundation, n.d. Accessed May 23, 2023. https://www.eff.org/issues/cda230.

Solon, Olivia. “Underpaid and Overburdened: The Life of a Facebook Moderator.” Guardian, May 25, 2017. https://www.theguardian.com/news/2017/may/25/facebook-moderator-underpaid-overburdened-extreme-content.

Stine, Kevin M., and Quynh H. Dang. “Encryption Basics.” NIST 82, no. 5 (2011): 44–46. https://www.nist.gov/publications/encryption-basics.

Thakor, Mitali. “Digital Apprehensions: Policing, Child Pornography, and the Algorithmic Management of Innocence.” Catalyst: Feminism, Theory, Technoscience 4, no. 1 (2018): 1–16. https://doi.org/10.28968/cftt.v4i1.29639.

Thakor, Mitali. “Capture Is Pleasure.” In Your Computer Is on Fire, ed. Thomas S. Mullaney, Benjamin Peters, Mar Hicks, and Kavita Philip. Cambridge, MA: MIT Press, 2021. https://mitpress.mit.edu/9780262539739/your-computer-is-on-fire/.

Thakor, Mitali. “How to Look: Apprehension, Forensic Craft, and the Classification of Child Exploitation Images.” IEEE Annals of the History of Computing 39, no. 2 (2017): 6–8. https://doi.org/10.1109/MAHC.2017.25.

“U.S. v. Ackerman.” Brennan Center for Justice, April 30, 2018. https://www.brennancenter.org/our-work/court-cases/us-v-ackerman.

“U.S. v. Warshak — 6th Circuit Court of Appeals 2010.” Electronic Frontier Foundation, April 13, 2018. https://www.eff.org/document/us-v-warshak-6th-circuit-court-appeals-2010.

Vincent, James. “Facebook Turns over Mother and Daughter’s Chat History to Police Resulting in Abortion Charges.” Verge, August 10, 2022. https://www.theverge.com/2022/8/10/23299502/facebook-chat-messenger-history-nebraska-teen-abortion-case.

“What Is the Difference between iMessage and SMS/MMS?” Apple Support, September 20, 2021. https://support.apple.com/en-us/HT207006.

“WhatsApp Encryption Overview: Technical White Paper.” WhatsApp, January 24, 2023. https://www.whatsapp.com/security/WhatsApp-Security-Whitepaper.pdf.
