
Patenting Bias: Algorithmic Race and Ethnicity Classifications, Proprietary Rights, and Public Data


Published on Aug 26, 2022


By focusing on patents for recent algorithms that incorporate publicly available data to yield automated racial and ethnic classification schemes, I provide a glimpse into how engineers and programmers understand and define racial and ethnic categories. Patents provide insights into how engineers and programmers encode assumptions about identity and behavior, due to disclosure provisions required by US patent law; similar requirements are present in patent laws throughout the world. Such disclosures provide insights that are otherwise unavailable for most proprietary assets. I further discuss how such classification tools, especially when combined with advertisement-based platforms such as social media, can amplify systemic racism present in the underlying data sets.


Keywords: racial and ethnic classifications, algorithmic bias, patents, public data

Tiffany Nichols
Department of the History of Science, Harvard University and Registered Patent Attorney in California and Washington, DC

Author disclosure: This case study is excerpted from Tiffany Nichols, “Patenting automation of race and ethnicity classifications: Protecting neutral technology or disparate treatment by proxy?,” in Abstractions and Embodiments: New Histories of Computing and Society, ed. Janet Abbate and Stephanie Dick (Baltimore: Johns Hopkins University Press, 2022), 102–125. Reprinted with permission of Johns Hopkins University Press.

Background image: US Patent and Trademark Office headquarters in Alexandria, Virginia. Photo: Alan Kotok, licensed under CC BY 2.0. 

Learning Objectives

  • Patents appear to be neutral documents, but upon closer review they provide insights into engineers’ and programmers’ subjective decisions about how to interpret and classify data.

  • Classification schemes can amplify trends present in big data sets. For example, such classification schemes can amplify systemic racism present in a big data set by overlaying additional biases and interpretations that stereotype or define particular groups based on societal assumptions of racial difference.

  • When algorithms that use public data, such as US Census data, are incorporated within patented methods, usage of the data that falls within the scope of the patent becomes a proprietary right of the patent holder.


Patents protecting methods of racial and ethnic classification are on the rise. Why? The landscape of personalized and finely tuned advertisements has fueled companies’ attempts to know and classify their users, allowing them to carefully craft advertisements based on the behaviors these companies believe consumers of a particular group embody. Such classification schemes parallel those based on age group, location, and interests. Engineers and programmers have also incorporated racial and ethnic classification schemes as another lens through which to know their user and consumer base, and as a proxy for predicting buying habits.

How do such technologies become proprietary? The main avenue is through patent protection.1 Patents are designed to provide neutral—value-free—descriptions of technologies for use by inventors, companies, courts, attorneys, legal scholars, and the general public. This apparent neutrality obscures the myriad human assumptions and decisions that are incorporated within the technologies described by patent disclosures. The appearance of neutrality—despite the underlying subjective judgments—can be especially troublesome when patents protect technical practices that entrench racial inequality.

This case study focuses on several recent patents for algorithmic methods that aim to classify individual people into racial and ethnic categories. In particular, I explore patents that were issued between 2019 and 2020 to companies such as Verizon, NetSuite, and Meta Platforms, Inc. (owner of the Facebook platform), granting these companies proprietary protection for techniques that yield automated, predictive assignment of socially constructed categories of race and ethnicity to images, identification documents, and individuals’ last names, usually for the purposes of microtargeted advertising.2 The analysis of these patents allows for insights, otherwise inaccessible due to proprietary protections, into how the engineers and programmers understand the racial and ethnic classifications underlying these technologies. This case study thereby reveals how the neutral language of these patents masks problematic assumptions and design-choice decisions, linking these recent technological approaches to a much longer history of (long-since debunked) scientific theories about race and ethnicity.3

Each of the patents considered here relies upon machine-learning techniques that crucially depend, in turn, upon constructing large training data sets. The techniques of constructing such training data sets are under increasing scrutiny. As Meredith Broussard notes, “If machine-learning models simply replicate the world as it is now, we won’t move toward a more just society.”4 This is because data reflects the inequalities of the society from which it was collected. Moreover, such data sets are never simply found, preformed; they must be created, and hence they reflect the assumptions, priorities, and decisions of the people who put them together.5

Safiya Umoja Noble has observed that “it is impossible to know when and what influences proprietary algorithm design” beyond the critique that “humans are designing” such systems, because private and corporate platforms place such proprietary products beyond public inspection.6 In direct response to this observation, my aim is to demonstrate that patent documents provide a powerful opportunity to open up the “black box” of often-hidden proprietary computing techniques to reveal some of the assumptions that shape the design and deployment of algorithms. In order for an inventor or a company to receive the twenty-year monopoly afforded by a patent, the patent applicant must engage in a trade-off, disclosing details about their proprietary invention in a specific type of document, which is published and available for public review eighteen months after the filing date. Such public disclosures are meant to fuel innovation and allow for incremental improvements to the patented technology, while enabling the patent holder to use the monopoly period to generate a return on the research and development investment.

The patent disclosures on which I focus provide detailed insights into how inventors and patent attorneys, agents, and examiners conceptualize race and ethnicity. Although the technologies under consideration are fairly new, by consistently ignoring social context and historical examples, their designers inadvertently reinscribe problematic ideas and practices from the past, with the risk of further marginalizing certain individuals and groups of people, a process that Noble has dubbed “technological redlining.”7 Just as nineteenth-century race scientists collected and categorized data, combined with an “accumulation of inferences,” to produce what was seen as an objective “scientific truth” about race and ethnicity classifications, the patents explored here rely on similar assemblages of data and assumptions in applying a racial or ethnic label to a user’s information, justified and naturalized within the neutral-seeming language of patent applications.8 These patented classification practices are particularly troubling in the context of big data and microtargeted advertisements, especially when combined with recent law enforcement practices.

Techniques of Neutralizing Patents and Obscuring Their Human Assumptions and Decisions

Patents are typically seen as neutral documents consisting of unadorned descriptions of allegedly novel technologies and methods. Their dry style, standardized language, and uniform figure aesthetics give the impression of a single, impersonal source. These conventions are partly legally required and partly serve to afford the broadest possible scope of patent protection. The resulting semblance of objectivity and neutrality obscures the very human decisions, assumptions, and labor involved in the production of patent documents and the technologies they describe. These features of patent documentation are particularly troubling when patents describe algorithms that incorporate outdated and/or inaccurate assumptions about race and ethnicity.

One way that patents masquerade as neutral and objective legal instruments is through their language, which is often jokingly referred to as “patent-ese.” This overly legalistic and abstract prose—partially mandated by law and partially reflecting conventions within the field—uses, for example, “comprising” instead of “has” and “embodiment” instead of “version.”9 Patent-ese foregrounds technology while obscuring the human activities and assumptions that underlie the disclosed inventions.10 Patents are written in the passive voice, reinforcing the conceptual disassociation of technologies from the people who design and control them. People actually design the algorithms, decide what data to use, supply the data, and determine the objectives of the processes described in the patent. Likewise, the labor of patent attorneys, patent examiners (who evaluate patent applications and determine whether a patent should be granted), and patent agents (who work with inventors to draft patent applications and negotiate the scope of a given patent with patent examiners) is hidden from view in the standardized patent documentation.

Software is patented in ways that particularly hide human agency. This is because standalone algorithms are not eligible for patent protections; legal mandates require that only algorithms that interact with tangible elements (such as specific computing hardware) are eligible for patent protections.11 As such, software routines are patented as generic “engines”—understood to be algorithms operating on machinery rather than abstract standalone algorithms. The result is an emphasis upon the agency of machines while obscuring the agency and design decisions of human programmers and software engineers.12

The way in which patents are granted also contributes to the apparent neutrality and credibility of the patented techniques, because the examination process parallels the peer-review system used by refereed academic journals. In the United States, patents are only granted to those inventions that are deemed new, useful, and nonobvious, as determined through a rigorous review by examiners at the United States Patent and Trademark Office (USPTO); the review process can often take several years.13 Applications for new patents for software often receive multiple rounds of review due to the vast number of existing software patents and pending applications. These repeated rounds of review are often perceived as rigorous, but they mostly indicate a field overcrowded with applications.

In the remainder of this case study, I examine some of the human assumptions, decisions, and labor that are obscured by the convention of preparing and evaluating patent applications. These patents provide time-stamped insights into how inventors—often engineers and programmers—understand and define race and ethnicity, and they reveal how automated race and ethnicity classifications are entangled with longer histories of race science and marginalizing activities, resulting in the re-creation and reinforcement of racial disparities. Patents are a rich and largely unexplored source base for scholars seeking to understand the mechanisms through which racial structures are replicated, reinforced, and amplified by technology.

Patents Claiming Last Names as a Proxy for Race and Ethnicity

US Patent No. 10,636,048, filed in 2017 and granted in 2020 (hereafter “Verizon patent”), describes a technique for assigning racial and ethnic classifications to names collected from contact lists extracted from users’ phones, in order to improve targeted advertisements.14 The patent states that “automatic rather than manual labeling of [the] training data set is desired in the context of constructing a large scale training data set with ethnicity labels,” because—according to the text—parsing through and labeling a set of one million names with the “appropriate ethnicity label” would take “8333 working hours, and such manual labeling places personal information contained within the email at risk.”15

Throughout the patent, the sentences are structured such that the actor is the machine-learning algorithm, thereby masking the human contributions to the system. For example, the patent states, “The machine learning algorithm regressively learns correlations between the features and the ethnicity label in the training data and establishes a model that may be used to classify an unlabeled name into a most probable ethnicity classification.”16 Because the sentence was written to make the machine learning algorithm the actor, it is unclear who is supplying the ethnicity label, the training data, and the parameters for the model. The patent further states:

Last names within each country are sorted by their popularity (such as number of occurrences). The popularity of a last name may be represented by a ranked name ratio of the last name in terms of popularity, between 0 and 1, with 0 representing the highest popularity and 1 representing the lowest popularity. . . . Ethnic compositions or ethnicity ratios of last names may be obtained by looking up the U.S. census data. Ranked name ratio–ethnicity ratio curves for each country may be constructed.17

The patent offers no critical interrogation of whether ethnicity can be discerned from last names or whether such a classification system and training set should be developed at all. Rather, the patent justifies the automated application of race and ethnicity labels in terms of benefits to the user—protecting their personal information—and benefits to the company—an 8,333-hour reduction in human effort. These moves frame the invention as both helpful and benign, while also deterring inquiries into how such categories are applied and whether they should even be applied at all.
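To make the quoted procedure concrete, the following sketch sorts invented last-name counts into ranked name ratios and pairs them with invented census-style ethnicity ratios. This is a hypothetical illustration only: every name, count, ratio, and function name here is made up, and the patent itself discloses no code.

```python
# Hypothetical sketch of the ranked-name-ratio procedure quoted above.
# All names, counts, and "ethnicity ratios" are invented; none are
# drawn from the patent or from actual census data.

def ranked_name_ratios(name_counts):
    """Sort last names by popularity and map each to a ratio in [0, 1],
    with 0 representing the most popular name and 1 the least popular."""
    ordered = sorted(name_counts, key=name_counts.get, reverse=True)
    n = len(ordered)
    return {name: (i / (n - 1) if n > 1 else 0.0)
            for i, name in enumerate(ordered)}

# Invented occurrence counts for a single (fictional) country.
counts = {"Alpha": 900, "Beta": 500, "Gamma": 200, "Delta": 50}

# Invented lookup table standing in for census "ethnicity ratios."
census_ratios = {
    "Alpha": {"White": 0.7, "Black": 0.1, "Hispanic": 0.1, "API": 0.1},
    "Beta":  {"White": 0.2, "Black": 0.5, "Hispanic": 0.2, "API": 0.1},
}

ratios = ranked_name_ratios(counts)
# Pair each name's popularity ratio with its census ethnicity ratios:
# the raw material for the patent's "ranked name ratio-ethnicity ratio
# curves."
curve_points = [(ratios[name], census_ratios[name]) for name in census_ratios]
```

Even in this toy form, the sketch makes visible the human choices the patent leaves unexamined: someone must supply the counts, choose the category labels, and decide that a census lookup is an acceptable source of “ethnicity ratios.”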

The classifications do not stop there. The patent discloses that a country of origin is predicted based on the location where a user’s email account was created.18 US census data is then used in combination with the collected names to determine the “ethnic compositions or ethnicity ratios of last names,” with which the algorithm constructs “ranked name ratio–ethnicity ratio curves for each country.”19 The only categories provided are Black, White, Hispanic, API (although not defined in the patent, this category most likely corresponds to Asian Pacific Islander), and Unknown—all selected without justification or definition for user data originating from the United Kingdom, India, Mexico, and the United States.

At first these labels appear to mirror US census labels, but upon further review, there are important differences. For example, the US census labels at the time of the filing of this patent were White, Black or African American, Asian, American Indian and Alaska Native, Native Hawaiian and Other Pacific Islander, and “Some Other Race” as racial categories, with Hispanic as an ethnic category.20 (Since 2010, respondents have been able to select multiple categories for race and ethnicity.21) Catherine Bliss explains that several of these categories were originally created by the US Office of Management and Budget (OMB) with the aim to “monitor and redress social inequality,” with the warning that the categories should not be viewed as scientific or anthropological.22

The patented algorithm deploys fewer (and different) categories for race and ethnicity than those used (for different purposes) in the US census. This mismatch should have been a cause for concern, since, as sociologist and legal scholar Dorothy Roberts explains, when underlying criteria for classifying data or people into categories are not consistent, researchers inevitably make “subjective decisions about where to place some subjects.”23 Roberts further notes that racial and ethnic categories vary by country and can change over time—they are, after all, categories introduced for specific social, institutional, or political purposes. Although there is no acknowledgement within the Verizon patent, the labels deployed for the algorithm are not compatible with how racial and ethnic groups are identified within the United Kingdom, India, or Mexico.

In the United Kingdom, for example, “White” is not an omnibus category; rather, the labels Welsh, English, Scottish, Northern Irish, and British are deployed. Instead of the category “Black,” the UK census employs African and Caribbean.24 The different sets of categories used in the two (otherwise similar) countries underscore that racial and ethnic identities are mutable and shaped by social context—a fundamental point that is ignored in the design of the algorithm and obscured in the patent. The patented method further fails to include categories for individuals who are multiethnic and/or multiracial.

Although the categories introduced for the algorithm manifestly do not match labeling schemes used in various parts of the world—including regions for which the patent claims the algorithm should be accurate—the inventors next make a further set of assumptions. They introduce their concept of “homo-ethnicity” to describe a country for which the inventors assume that “the vast majority of its population had the same or similar ethnicity.” Presenting this assumption as fact, the inventors claim that associating last names within a given country with the dominant ethnicity category for that country will produce a “high accuracy” of classification. The inventors designate a country as “homo-ethnic” when one group makes up at least 80 percent of the population. Based on this (arbitrary) threshold, the most frequently occurring last names within a given country are labeled by the algorithm as belonging to the “dominant” ethnicity.25 The inventors offer the United Kingdom, India, and Mexico as examples of homo-ethnic countries—even though (as the UK example makes clear) residents and officials in these countries typically adopt different sets of categorizations that belie such claims about “homo-ethnicity.”26
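The “homo-ethnicity” rule described above reduces to a simple threshold test. The sketch below shows one way it might work; the 80 percent cutoff comes from the patent, but the population shares, names, and function names are invented for illustration.

```python
# Hypothetical sketch of the patent's "homo-ethnicity" rule: if one
# group is assumed to make up at least 80 percent of a country's
# population, the most frequent last names there are labeled with that
# "dominant" ethnicity. Shares and names below are invented.

HOMO_ETHNIC_THRESHOLD = 0.80  # the patent's (arbitrary) cutoff

def dominant_label(population_shares, threshold=HOMO_ETHNIC_THRESHOLD):
    """Return the 'dominant' ethnicity label if the country is deemed
    homo-ethnic under the threshold, else None."""
    label, share = max(population_shares.items(), key=lambda kv: kv[1])
    return label if share >= threshold else None

# Invented shares for a fictional country.
shares = {"GroupA": 0.85, "GroupB": 0.10, "GroupC": 0.05}

label = dominant_label(shares)
top_names = ["Alpha", "Beta", "Gamma"]  # invented most-frequent names
labeled = {name: label for name in top_names} if label else {}
```

The sketch makes the arbitrariness plain: a country at 79 percent receives no label at all, while one at 80 percent has its most common surnames blanket-labeled with a single ethnicity.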

The designers of the algorithm next introduce assumptions about correlations between behavior and racial or ethnic identity on social networks—akin to efforts throughout the nineteenth century and in the aftermath of redlining and segregation to correlate behavior and race. The patent states, “A user is more likely to communicate frequently with a contact of his/her ethnicity (e.g., to send email to his/her contacts),” and therefore “an ethnicity classifier may be developed by producing a learning document containing account holders’ names and their frequent contact names in close proximity.”27 No supportive evidence is provided for the claim that frequent contacts in close proximity are among people of the same ethnic group, although this assumption is critical to the operation of the algorithm. As the inventors next claim, once the last name of a user is associated with a specific ethnic label, “cultural, social, economic, commercial, and other preferences of the person” can be linked to the user data.28

A second recent patent reveals similar patterns. NetSuite’s US Patent No. 10,430,859, filed in 2016 and granted in 2019 (hereafter “NetSuite patent”), also describes a method for identifying race for the purposes of advertising. NetSuite offers data-driven analysis of customers and their purchases that lends itself to targeted advertising, especially on social media platforms such as Facebook. The inventors of this patent claim a method of “inferring” a user’s ethnicity or nationality based on their first and last names, zip code, address, and purchase history—though only a single purchase is necessary.29

The claim scope is “a computer-implemented method for generating a recommendation for a product or service to a customer” that uses census data to gain “information regarding an ethnicity or ethnic group of a person based on one or more of their first name, last name or zip code.”30 The patented method begins by using customer address information obtained from a user’s online purchase or creation of a loyalty account—information that users often provide without so much as a second thought.31 If a user does not provide an address, the patented method uses their internet protocol (IP) address instead.32 This information is then compared with publicly available resources such as census data and a “probability” of the user’s “specific demographic characteristic of interest” is calculated—referred to as an “educated guess” in the patent—based on correlations between demography, locations, and purchases.33 Based on this “guess” of the user’s ethnicity, the automated system provides a “product or service recommendation,” a neutral term for targeted advertisements.34 Like the Verizon patent, the description of the method within the NetSuite patent suggests that the algorithm acts alone, obscuring how the method black-boxes various subjective assumptions and decisions.
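The “educated guess” step described above amounts to comparing a customer’s name and zip code against census-derived frequency tables and picking the highest-probability category. The following sketch is one possible reading of that step; the lookup tables, the choice to combine probabilities by multiplication, and all names and values are my own invented assumptions, not disclosures from the patent.

```python
# Hypothetical sketch of the NetSuite patent's "educated guess":
# compare a customer's last name and zip code against census-style
# frequency tables and return the most probable category. All tables
# and figures are invented; the combination rule (multiplication) is
# an illustrative assumption, not the patent's disclosed method.

def educated_guess(last_name, zip_code, name_table, zip_table):
    """Combine per-name and per-zip category probabilities and return
    (category, score) for the highest-scoring category."""
    name_probs = name_table.get(last_name, {})
    zip_probs = zip_table.get(zip_code, {})
    combined = {cat: name_probs.get(cat, 0.0) * zip_probs.get(cat, 0.0)
                for cat in set(name_probs) | set(zip_probs)}
    if not combined or max(combined.values()) == 0.0:
        return ("Unknown", 0.0)
    best = max(combined, key=combined.get)
    return (best, combined[best])

# Invented lookup tables for a fictional name and zip code.
name_table = {"Alpha": {"White": 0.6, "Hispanic": 0.4}}
zip_table = {"00000": {"White": 0.3, "Hispanic": 0.7}}

guess = educated_guess("Alpha", "00000", name_table, zip_table)
# The zip-code table tips the "guess" toward Hispanic (0.4 * 0.7
# beats 0.6 * 0.3), illustrating how much the outcome depends on the
# designer's choice of tables and combination rule.
```

Note how the sketch forces decisions the patent glosses over: what to do when a name is absent from the table, how to weigh name against location, and which categories exist at all.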

The NetSuite patent provides examples of predicting ethnicity based on last name—a mutable form of identification—that reveal just how arbitrary these “educated guesses” can be. For example, Table 1 shows that the algorithm assigns the name “Lee” to users labeled as “Chinese/Asian” with 100 percent probability.35 This assignment is troubling, given that individuals with the last name “Lee” are members of a diverse range of groups—just think of the award-winning filmmaker Shelton Jackson “Spike” Lee or rock guitarist Tommy Lee of the band Mötley Crüe.36 More generally, this discrepancy brings to mind sociologist Ruha Benjamin’s observation that the last names of African Americans and Filipinos are not predictive of their racial or ethnic group due to histories of slavery, forced emigration, and colonialization.37

Table 1

Table from US Patent No. 10,430,859 (“NetSuite patent”), Col. 20, Lns. 25–34, Showing Assignment of Ethnicity to Last Names

| Last name | Probability of being male | Probability of being female | Probability of ethnic background identification |
| --- | --- | --- | --- |
| — | not available | not available | 100% Eastern European / White |
| — | not available | not available | 100% Eastern European / White |
| — | not available | not available | 100% Chinese / Asian |
| — | not available | not available | 100% Indian / Asian |
Both the Verizon and NetSuite patents describe algorithms that are predicated on a series of assumptions about human diversity and individual behavior—few of which are compatible with decades of research by social scientists, legal scholars, and other experts—and the deployment of such algorithms threatens to reinforce historical marginalization. Such algorithms enable companies to target advertisements—including for jobs, real estate, news, and public health content—to some users rather than others, based on assumptions about racial or ethnic identity.

Beyond the problematic assumptions about identity and behavior, the Verizon and NetSuite patents both engage in a move to privatize methods that use US census data—a public resource collected with taxpayer funding—for private profit. Such maneuvers arguably run counter to the primary purpose of the census, and can even limit what others do with such public data for fear of litigation arising from claims of patent infringement.

Patent Linking Racial and Ethnic Classifications to Surveillance Technologies

Algorithmic methods that aim to link racial and ethnic identifications to individual users are deployed well beyond targeted advertising. Consider US Patent No. 10,706,277, filed in 2019 and issued to Facebook in 2020 (hereafter “Facebook patent”).38 This patent describes a method for automatically predicting the race and ethnicity of a person in an image associated with a government-issued document, such as a driver’s license.39 The method makes use of additional data, such as arrest records, demonstrating that Facebook directly profits from public carceral data. As media studies scholar Siva Vaidhyanathan has observed, “Facebook’s playbook has seemed to be slowly and steadily acclimating users to a system of surveillance and distribution that if introduced all at once might seem appalling.”40

The Facebook patent summarizes the invention as extracting characteristics from a captured image and comparing those characteristics with “priori knowledge” in order to yield a prediction of a depicted person’s race and ethnicity.41 The inventor acknowledges that such “priori knowledge” is drawn from (in his own words) “sensitive”—and, in some cases, “confidential”—information. As he explains, the additional information can “include one or more of a name, an address, a social security number, an identification number, banking information, a date of birth, a driver’s license number, an account number, financial information, transcript information, an ethnicity, arrest record, health information, medical information, email addresses, phone numbers, web addresses, IP numbers, or photographic data associated with the person.”42

Once a personal identification document has been captured by the patented system—often with additional cooperation from the user, who provides a self-portrait on their smart phone—all possible personal identifying information is extracted.43 If the identification document is a driver’s license, then the system also extracts the sex, height, and full-face photograph of the individual.44 Based on this information, the user is linked with a race or ethnicity classification and also with known arrest records.

In short, Facebook—a for-profit social media company—holds patent rights at least until 2039 (twenty years after the initial patent filing) for a method of extracting personal information from government documents, assigning racial and ethnic labels to individual users, and producing individualized profiles that aggregate everything from private financial and medical histories to arrest records. Facebook’s own Terms of Service includes provisions that allow for the sharing of such user profiles with other Facebook companies as well as with law enforcement agencies.45

As early as 2013, the US Department of Justice released a report promoting the use by law enforcement agencies of Facebook and other social media platforms. Local police departments were encouraged to “mine” the data available on such commercial sites “to identify victims, witnesses, and perpetrators. Witnesses to crime—and even perpetrators—often post photographs, videos, and other information about an incident that can be used as investigative leads or evidence.”46 In more recent years, law enforcement agencies have used data from social media companies, including Facebook, to identify and arrest protesters in the wake of police shootings of unarmed Black people, such as Breonna Taylor, George Floyd, and Jacob Blake.47 Facebook, in turn, has provided information for law enforcement agencies on how to create a “successful” Facebook presence. Among the suggestions: how to create Facebook posts with images of wanted individuals.48

Where Do We Go from Here?

On September 12, 2020, a manhunt was underway following the shooting of two deputies of the Los Angeles County Sheriff’s Department in Compton, California. The department announced that the suspect was “a dark skinned male.”49 Posts immediately appeared across social media platforms, including the Lakewood, CA Regional Crime Awareness, Prevention & Safety Group on Facebook, naming an alleged culprit.50 The posts included photographs of Compton resident Darnell Hicks, his driver’s license photo, his address, a photo of his car, and his license plate number, along with claims that he was armed and dangerous.

Hicks received nonstop notifications on his phone with messages like “be on the lookout,” along with images of the social media posts. At first Hicks thought the notifications were a prank, but he quickly learned just how serious the situation had become. Although Hicks was cleared by a tweet from the LA County Sheriff’s Department, he received death threats, including social media posts by private citizens calling for him to be “shot on sight,” for more than two weeks. He has also had to retain an attorney.51

The LA County Sheriff’s Department, journalists, attorneys, and even those disseminating the false information about Hicks all referred to the social media posts, and yet neither Facebook nor Twitter was ever implicated in the harm done to Hicks and his family. Once Hicks’ photo and personal data were posted to Facebook and shared without his permission, this information was likely incorporated into Facebook’s big data and algorithmic assets. Although the Sheriff’s Department announced that the information was erroneous, the damage had been done. Hicks had been falsely accused; his personal information had been made available via commercial social media platforms, including Facebook; and he and his family had to endure death threats for weeks.

The Facebook post continues to have a life well beyond the dramatic events of September 2020. Darnell Hicks’ image and location information are now part of enormous social media data assets, where they have been tagged with a racial identification, a geographic location, and keywords including “armed,” “dangerous,” “murder,” “wanted,” “black male,” “gang member,” and “crime.” Even though the LA County Sheriff’s Department exonerated him, Facebook likely will not. After all, algorithms like those protected by US Patent No. 10,706,277 (the “Facebook patent”) rely critically on access to public data from government agencies, linking such data to that collected by Facebook; yet the companies that develop and deploy such algorithms are beholden to few of the laws or regulations that pertain to government agencies in the United States, such as constitutional protections against “unreasonable searches.” In this regard, the Electronic Frontier Foundation has obtained Facebook’s 2009 and 2010 “Facebook Law Enforcement Guidelines,” revealing that Facebook provides profile and usage data in response to preservation, formal, and emergency requests from law enforcement.52 The commercial platforms are also exempted from libel laws in the United States, which otherwise prohibit falsely accusing individuals of crimes.

As the examples of the Verizon, NetSuite, and Facebook patents make clear, algorithms that rely on large training data sets, and the patents that protect them, are not neutral. Rather, they incorporate tacit, at-times problematic assumptions about identity and behavior—some of which are little different than long-since debunked ideas that date back centuries—while naturalizing those subjective judgments with an appearance of objectivity and neutrality. Inventors’ colloquial understandings of race and ethnicity inform choices of which labels to use and what correlated behaviors to expect. The resulting algorithms can reinforce those subjective notions by targeting advertisements—for jobs, products, news, and more—to some people but not to others. Moreover, the selection and reinforcement of certain racial and ethnic categories in these technologies can inadvertently amplify historical practices, ranging from forced emigration and colonialism, to racist housing and lending practices, to biased policing patterns, that have shaped where people live and with whom they most often associate.53

Each of the patents described here relies upon the power to monetize public data for proprietary gain, ranging from US census data to arrest records. Such information is collected at taxpayers’ expense, yet few provisions exist that could ensure that the commercial purposes to which the data are redirected will benefit the public good.

Discussion Questions

• How is systemic racism manifested in big data sets? What underlying policies and histories have contributed to such characteristics in big data sets? Have you seen or interacted with big data sets that may embody systemic racism?

• Does the patent examination process normalize algorithmic bias, systemic racism, or related forms of discrimination?

• Do classification schemes amplify systemic racism? If so, in what ways? What techniques, practices, or regulations might counter such effects?

• What limitations, if any, would you place on the collection of user data to prevent combinations of big data sets that can amplify systemic racism?


Acknowledgments

I thank Janet Abbate, Alex Csiszar, William Deringer, Gerardo Con Diaz, Stephanie Dick, Avriel Epps-Darling, Peter Galison, David Kaiser, Jovonna Jones, Asya Magazinnik, Zachary Schutzman, Justin Steil, Erica Sterling, Cori Tucker-Price, and the anonymous reviewers for the Johns Hopkins University Press for their insightful comments and feedback on earlier versions of this case study.


Bibliography

Alexander, Michelle. The New Jim Crow: Mass Incarceration in the Age of Colorblindness. New York: New Press, 2010.

Benjamin, Ruha. Race after Technology: Abolitionist Tools for the New Jim Code. Medford, MA: Polity, 2019.

Bliss, Catherine. Race Decoded: The Genomic Fight for Social Justice. Stanford, CA: Stanford University Press, 2012.

Bouk, Dan. “Error, Uncertainty, and the Shifting Ground of Census Data.” Harvard Data Science Review 2, no. 2 (2020).

Broussard, Meredith. Artificial Unintelligence: How Computers Misunderstand the World. Cambridge, MA: MIT Press, 2018.

Burk, Dan L., and Jessica Reyman. “Patents as Genre: A Prospectus.” Law & Literature 26, no. 2 (2014): 163–90.

Coates, Ta-Nehisi. “The Case for Reparations.” Atlantic, June 2014.

Con Diaz, Gerardo. “Intangible Inventions: A History of Software Patenting in the United States, 1945–1985.” Enterprise & Society 18, no. 4 (2017): 784–94.

D’Ignazio, Catherine, and Lauren Klein. “Who Collects the Data? A Tale of Three Maps.” MIT Case Studies in Social and Ethical Responsibilities of Computing no. 1 (Winter 2021).

Fabian, Ann. The Skull Collectors: Race, Science, and America’s Unburied Dead. Chicago: University of Chicago Press, 2020.

Farberov, Snejana. “Compton Father-of-Two, 33, Says He Is Receiving Death Threats after Being Falsely Accused on Social Media of Shooting of Two LA County Sheriff's Deputies.” Daily Mail, September 15, 2020.

Farivar, Cyrus, and Olivia Solon. “FBI Trawled Facebook to Arrest Protesters for Inciting Riots, Court Records Show.” NBC News, June 19, 2020.

Gabriel, Abram. “A Biologist’s Perspective on DNA and Race in the Genomics Era.” In Genetics and the Unsettled Past: The Collision of DNA, Race, and History, ed. Keith Wailoo, Alondra Nelson, and Catherine Lee, 43–66. New Brunswick, NJ: Rutgers University Press, 2012.

Gowrappan, Guru. “Introducing Verizon Media.” Verizon, December 18, 2018.

Haller, John S., Jr. Outcasts from Evolution: Scientific Attitudes of Racial Inferiority, 1859–1900. Carbondale: Southern Illinois University Press, 1971.

Hammonds, Evelynn, and Rebecca Herzig, eds. The Nature of Difference: Sciences of Race in the United States from Jefferson to Genomics. Cambridge, MA: MIT Press, 2008.

Haney López, Ian. White by Law: The Legal Construction of Race. New York: New York University Press, 1996.

Hays, Kali. “After the 2020 Protests in Kenosha over the Police Shooting of Jacob Blake, Law Enforcement Went Digging for Private Information from Facebook. They Got Everything They Wanted.” Business Insider, February 18, 2022.

Hicks, Darnell. “Who Is Trying to Frame Darnell Hicks as the Shooter of 2 Deputies in Compton?” Interview by Street TV, September 16, 2020.

Ignatyev, Oleksiy. System and Method of Generating a Recommendation of a Product or Service Based on Inferring a Demographic Characteristic of a Customer. US Patent No. 10,430,859, filed March 24, 2016, and issued October 1, 2019.

Kant, Tanya. “Identity, Advertising, and Algorithmic Targeting: Or How (Not) to Target Your ‘Ideal Use.’” MIT Case Studies in Social and Ethical Responsibilities of Computing no. 2 (Summer 2021).

Leys Stepan, Nancy, and Sander L. Gilman. “Appropriating the Idioms of Science: The Rejection of Scientific Racism.” In The “Racial” Economy of Science: Toward a Democratic Future, ed. Sandra Harding, 170–94. Bloomington: Indiana University Press, 1993.

Madrigal, Alexis C. “The Racist Housing Policy that Made your Neighborhood.” Atlantic, May 22, 2014.

Martin, Erika, Kimberly Cheng, and Nexstar Media Wire. “California Father Falsely Accused in Ambush of Sheriff’s Deputies Speaks Out.” Fox 8 Local News, Los Angeles, September 15, 2020.

Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press, 2018.

Nobles, Melissa. Shades of Citizenship: Race and the Census in Modern Politics. Stanford, CA: Stanford University Press, 2000.

Nurik, Chloé Lynn. “Facebook and the Surveillance Assemblage: Policing Black Lives Matter Activists and Suppressing Dissent.” Surveillance & Society 20, no. 1 (2022): 30–46.

Office for National Statistics. “Ethnic Group, National Identity, and Religion.” UK Statistics Authority, January 18, 2016.

Pew Research Center. “What Census Calls Us.” February 26, 2020.

Pratt, Mary Louise. “Scratches on the Face of the Country; or, What Mr. Barrow Saw in the Land of the Bushmen.” Critical Inquiry 12, no. 1 (1985): 119–43.

Roberts, Dorothy. Fatal Invention: How Science, Politics, and Big Business Re-Create Race in the Twenty-First Century. New York: New Press, 2011.

Rodríguez, Clara E. Changing Race: Latinos, the Census, and the History of Ethnicity. New York: New York University Press, 2000.

Rodriguez, Raphael A. Storing Anonymized Identifiers Instead of Personally Identifiable Information. US Patent No. 10,706,277, filed February 19, 2019, and issued July 7, 2020.

Suresh, Harini, and John Guttag. “Understanding Potential Sources of Harm throughout the Machine Learning Life Cycle.” MIT Case Studies in Social and Ethical Responsibilities of Computing no. 2 (Summer 2021).

US Census Bureau. “Overview of Race and Hispanic Origin: 2010.” US Department of Commerce, Economics and Statistics Administration, March 2011.

US Department of Justice. “Social Media and Tactical Considerations of Law Enforcement.” Community Oriented Policing Services (COPS), July 2013.

US Patent and Trademark Office. Assignment Database. Reel/Frame: 058520/0535.

US Patent and Trademark Office. “2016 Patent Subject Matter Eligibility.” In Manual of Patent Examining Procedure, ed. Robert A. Clarke. 9th ed. Alexandria, VA: USPTO, Department of Commerce, 2020.

US Patent and Trademark Office. “2103 Patent Examination Process.” In Manual of Patent Examining Procedure, ed. Robert A. Clarke. 9th ed. Alexandria, VA: USPTO, Department of Commerce, 2020.

Vaidhyanathan, Siva. Antisocial Media: How Facebook Disconnects Us and Undermines Democracy. New York: Oxford University Press, 2018.

Villanueva, Alex. “Advisory: Manhunt Underway for Suspect in Ambush Shooting of 2 LA Sheriff’s Deputies in Compton.” Los Angeles County Sheriff’s Department Information Bureau (SIB), September 12, 2020.

Winton, Richard. “Social Media Accused Him of Ambushing Two Deputies. It Was Fake News, but He’s Paid a Steep Price.” Los Angeles Times, September 16, 2020.

Ye, Junting, Yigan Hu, Baris Coskun, Meizhu Liu, and Steven Skiena. Name-Based Classification of Electronic Account Users. US Patent No. 10,636,048, filed January 27, 2017, and issued April 28, 2020.
