An open access publication of the American Academy of Arts & Sciences
Spring 2022

Distrust of Artificial Intelligence: Sources & Responses from Computer Science & Law

Authors
Cynthia Dwork and Martha Louise Minow
Abstract

Social distrust of AI stems in part from incomplete and faulty data sources, inappropriate redeployment of data, and frequently exposed errors that reflect and amplify existing social cleavages and failures, such as racial and gender biases. Other sources of distrust include the lack of “ground truth” against which to measure the results of learned algorithms, divergence of interests between those affected and those designing the tools, invasion of individual privacy, and the inapplicability of measures such as transparency and participation that build trust in other institutions. Needed steps to increase trust in AI systems include involvement of broader and diverse stakeholders in decisions around selection of uses, data, and predictors; investment in methods of recourse for errors and bias commensurate with the risks of errors and bias; and regulation prompting competition for trust.

Cynthia Dwork, a Fellow of the American Academy since 2008, is the Gordon McKay Professor of Computer Science in the John Paulson School of Engineering and Applied Sciences and Radcliffe Alumnae Professor in the Radcliffe Institute for Advanced Study at Harvard University. Her work has established the pillars of fault-tolerant distributed systems, modernized cryptography to address the ungoverned interactions of the Internet and the era of quantum computing, revolutionized privacy-preserving statistical data analysis, and launched the field of algorithmic fairness.

Martha Minow, a Fellow of the American Academy since 1992, is the 300th Anniversary University Professor and Distinguished Service Professor at Harvard University. She also serves as Cochair of the American Academy’s project on Making Justice Accessible: Designing Legal Services for the 21st Century. She is the author of, most recently, Saving the News: Why the Constitution Calls for the Government to Act to Preserve the Freedom of Speech (2021), When Should Law Forgive? (2019), and In Brown’s Wake: Legacies of America’s Educational Landmark (2010).

Works of imagination, from Frankenstein (1818) to the film 2001: A Space Odyssey (1968) and the Matrix series (1999–2021), explore fears that human-created artificial intelligences threaten human beings due to amoral logic, malfunctioning, or the capacity to dominate.1 As computer science expands from human-designed programs spelling out each step of reasoning to programs that automatically learn from historical data, infer outcomes for individuals not yet seen, and influence practices in core areas of society–including health care, education, transportation, finance, social media, retail consumer businesses, and legal and social welfare bureaucracies–journalistic and scholarly accounts have raised questions about reliability and fairness.2 Incomplete and faulty data sources, inappropriate redeployment of data, and frequently exposed errors amplify existing social dominance and cleavages. Add mission creep–like the use of digital tools intended to identify detainees needing extra supports upon release to instead determine release decisions–and it is no wonder that big data and algorithmic tools trigger concerns over loss of control and spur decay in social trust essential for democratic governance and workable relationships in general.3

Failures to name and comprehend the basic terms and processes of AI add to specific sources of distrust. This essay examines those sources and ends with potential steps forward, anticipating both short-term and longer-term challenges.

Artificial intelligence signifies a variety of technologies and tools that can solve tasks requiring “perception, cognition, planning, learning, communication, and physical actions,” often learning and acting without oversight by their human creators or other people.4 These technologies are already widely used by governments, private companies, and other private actors to distribute goods and benefits.

Trust means belief in the reliability or truth of a person or thing.5 Trust is associated with comfort, security, and confidence; its absence breeds doubt about the reliability or truthfulness of a person or thing. That doubt generates anxieties, alters behaviors, and undermines cooperation needed for private and public action. Distrust is corrosive.

Distrust is manifested in growing calls for regulation, the emergence of watchdog and lobbying groups, and the explicit recognition of new risks requiring monitoring by corporate audit committees and accounting firms.6 Critics and advocates alike acknowledge that increasing deployment of AI could have unintended but severe consequences for human lives, ranging from impairments of friendships to social disorder and war.7 These concerns multiply in a context of declining trust in government and key private institutions.8

An obvious source of distrust is evidence of unreliability. Unreliability could arise around a specific task, such as finding that your child did not run the errand to buy milk as you requested. Or it could follow self-dealing: did your child keep the change from funds used to purchase the milk rather than returning the unused money to you? Trust is needed when we lack the time or ability to oversee each task to ensure truthful and accurate performance and devotion to the interests of those relying on the tasks being done.

Political theorist Russell Hardin explains trust as “encapsulated interest, in which the truster’s expectations of the trusted’s behavior depend on assessments of certain motivations of the trusted. I trust you because your interests encapsulate mine to some extent–in particular, because you want our relationship to continue.”9 Trust accordingly is grounded in the truster’s assessment of the intentions of the trusted with respect to some action.10 Trust is strengthened when I believe it is in your interest to adhere to my interests in the relevant matter.11 Those who rely on institutions, such as the law, have reasons to believe that they comport with governing norms and practices rather than serving other interests. Trust in hospitals and schools depends on assessments of the reliability of the institution and its practices in doing what it promises to do, as well as its responses to inevitable mistakes.12 With repeated transactions, trust depends not only on results, but also on discernable practices reducing risks of harm and deviation from expected tasks. Evidence that the institution or its staff serves the interests of intended beneficiaries must include guards against violation of those interests. Trust can grow when a hospital visibly uses good practices with good results and communicates the measures to minimize risks of bad results and departures from good practices.

External indicators, such as accreditation by expert review boards, can signal adherence to good practices and reason to expect good results. External indicators can come from regulators who set and enforce rules, such as prohibitions of self-dealing through bans on charging more than is justifiable for a procedure and prohibiting personal or institutional financial interests that are keyed to the volume of referrals or uses.13 Private or governmental external monitors must be able to audit the behavior of institutions.14 External review will not promote trust if external monitors are not themselves trusted. In fact, disclosure amid distrust can feed misunderstandings.15

Past betrayals undermine trust. Personal and collective experiences with discrimination or degradation–along lines of race, class, gender, or other personal characteristics–especially create reasons for suspicion if not outright distrust. Similarly, experiences with self-interested companies that make exploitative profits can create or sustain distrust. Distrust and the vigilance it inspires may itself protect against exploitation.16

These and further sources of distrust come with uses of AI, by which we mean: a variety of techniques to discern patterns in historical “training” data that are determinative of status (is the tumor benign?) or predictive of a future outcome (what is the likelihood the student will graduate within four years?). The hope is that the patterns discerned in the training data will extend to future unseen examples. Algorithms trained on data are “learned algorithms.” These learned algorithms classify and score individuals based on how the system designer chose, equitably or not, to represent them to the algorithm. These representations of individuals and “outcomes” can be poor proxies for the events of interest, such as using re-arrest as a proxy for recidivism or a call to child protective services as a proxy for child endangerment.17 Distrust also results from the apparent indifference of AI systems. Learned algorithms lack indications of adherence to the interests of those affected by their use. They also lack apparent conformity with norms or practices legible to those outside of their creation and operations.
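
To make the mechanics concrete, consider a minimal sketch of such a pipeline. Everything in it is invented for illustration: the feature columns, the proxy label (re-arrest standing in for recidivism), and the model choice are hypothetical, not a description of any deployed system.

```python
# Minimal, hypothetical sketch of a "learned algorithm." The features, the
# proxy label (re-arrest standing in for recidivism), and the model are all
# invented for illustration; this is not any deployed system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Designer-chosen representation of individuals: each row is a person, each
# column a feature the designer decided to include. These choices are not neutral.
X_train = rng.normal(size=(1000, 2))

# Proxy outcome: 1 if re-arrested within two years, else 0. Re-arrest reflects
# policing patterns as well as behavior, so it is an imperfect stand-in for
# the event of real interest.
y_train = (X_train[:, 1] + 0.5 * rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# The learned algorithm now scores an individual it has never seen, extending
# whatever patterns (and distortions) were present in the training data.
new_person = np.array([[0.3, -1.2]])
print(model.predict_proba(new_person)[0, 1])  # score for the proxy event
```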

When designed solely at the directive of governments and companies, AI may only serve the interests of governments and companies–and risk impairing the interests of others.

Despite sophisticated techniques for training algorithms on data sets, there is no ground truth available to check whether the results match reality. This is a basic challenge for ensuring reliable AI. We can prove that the learned algorithm is indeed the result of applying a specific learning technique to the training data, but when the learned algorithm is applied to a previously unseen individual, one not in the training data, we do not have proof that the outcome is correct in terms of an underlying factual basis, rather than inferences from indirect or arbitrary factors. Consider an algorithm asked to predict whether a given student will graduate within four years. This is a question about the future: when the algorithm is applied to the data representing the student, the answer has not yet been determined. A similar quandary surrounds risk scoring: what is the “probability” that an individual will be re-arrested within two years? This question struggles to make sense even mathematically: what is the meaning of the “probability” of a nonrepeatable event?18 Is what we perceive as randomness in fact certainty, if only we had sufficient contextual information and computing power? Inferences about the future when predicated on limited or faulty information may create an illusion of truth, but illusion it is.
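
One way to give the difficulty a concrete form: what can be verified empirically about a risk score is an aggregate property such as calibration, never the correctness of a single prediction. The following is a minimal sketch using standard definitions, not a claim about any particular system.

```latex
% A learned score assigns each represented individual $x$ a number $f(x) \in [0,1]$.
% For a single person, the outcome $Y \in \{0,1\}$ (say, re-arrest within two
% years) simply occurs or does not; there is no observed probability against
% which $f(x)$ can be checked. What can be tested is a group-level property
% such as calibration:
\[
  \mathbb{E}\bigl[\, Y \mid f(X) = s \,\bigr] \;\approx\; s
  \qquad \text{for each score value } s,
\]
% that is, among everyone who received score $s$, roughly an $s$-fraction
% experience the outcome. This constrains averages over groups; it says
% nothing about the correctness of the score assigned to one nonrepeatable case.
```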

Further problems arise because techniques for building trust are too often unavailable with algorithms used for scoring and categorizing people for public or private purposes. Familiar trust-building techniques include transparency so others can see inputs and outcomes, opportunities for those affected to participate in designing and evaluating a system and in questioning its individual applications, monitoring and evaluation by independent experts, and regulation and oversight by government bodies.

Trust in the fairness of legal systems increases when those affected participate with substantive, empowering choices within individual trials or panels reviewing the conduct of police and other officials. Could participation of those affected by AI help build trust in uses of AI?19 Quite apart from influencing outcomes, participation gives people a sense that they are valued, heard, and respected.20

Participatory procedures signal fairness, help to resolve uncertainties, and support deference to results.21 Following prescribed patterns also contributes to the perceived legitimacy of a dispute resolution system.22

But there are few if any roles for consumers, criminal defendants, parents, or social media users to raise questions about the algorithms used to guide the allocation of benefits and burdens. Nor are there roles for them in the construction of the information-categorizing algorithms. Opportunities to participate are not built into the design of algorithms, data selection and collection protocols, or the testing, revision, and use of learning algorithms. Ensuring a role for human beings to check algorithmic processes can even introduce further inaccuracies. An experiment allowing people to give feedback to an algorithmically powered system actually showed that participation lowered trust–perhaps by exposing people to the scope of the system’s inaccuracies.23

Suggestions for addressing distrust revolve around calls for “explainability” and ensuring that independent entities have access to the learned algorithms themselves.24 “Access” can mean seeing the code, examining the algorithm’s outputs, and reviewing the choice of representation, sources of training data, and demographic benchmarking.25 But disclosure of learned algorithms themselves has limited usefulness in the absence of data features with comprehensible meanings and explanations of the weights determining the contribution of each feature to outcomes. Machine learning algorithms use mathematical operations to generate data features that almost always are not humanly understandable, even if disclosed, and whose learned combinations would do nothing to explain outcomes, even to expert auditors.
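
A small, hypothetical sketch illustrates the point about disclosure (the data and network below are synthetic, invented for this example): even with every parameter in hand, the features a model learns internally are arrays of numbers with no human-legible meaning.

```python
# Hypothetical sketch: full disclosure of a learned model yields arrays of
# numbers, not human-legible "features" or explanations. Data and network
# are synthetic, invented for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))           # designer-chosen input representation
y = (X[:, 0] - X[:, 3] > 0).astype(int)  # synthetic outcome for illustration

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000).fit(X, y)

# "Access" to the model: every learned weight can be printed...
for i, layer_weights in enumerate(model.coefs_):
    print(f"layer {i}: weight matrix of shape {layer_weights.shape}")

# ...but the internal features these weights define (combinations of
# combinations of the inputs) carry no meaning an auditor can read off,
# even though nothing has been hidden.
```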

Regulation can demand access and judgments by qualified experts and, perhaps more important, require behavior attentive not only to narrow interests but also to broader public concerns. Social distrust of X-rays produced demands for regulation; with regulation, professional training, and standards alert to health effects, X-rays gained widespread trust.26 Yet government regulators and independent bodies can stoke public fears if they contribute to misinformation and exaggerate risks.27

For many, reliance on AI arouses fears of occupational displacement. Now white-collar as well as blue-collar jobs seem at risk. One study from the United Kingdom reported that more than 60 percent of people surveyed worry that their jobs will be replaced by AI. Many believe that their jobs and opportunities for their children will be disrupted.28 More than one-third of young Americans report fears about technology eliminating jobs.29 Despite some predictions of expanded and less-repetitive employment, no one can yet resolve doubts about the future.30 Foreboding may be exacerbated by awareness that, by our uses of technology, we contribute to the trends we fear. Many people feel forced to use systems such as LinkedIn or Facebook.31 People report distrust of the Internet but continue to increase their use of it.32

Some distrust AI precisely because human beings are not visibly involved in decisions that matter to human beings. Yet even the chance to appeal to a human is insufficient when individuals are unaware that there is a decision or score affecting their lives.

As companies and governments increase their use of AI, distrust mounts considerably with misalignment of interests. Airbnb raised concerns when it acquired Trooly Inc. and, with it, a patented “trait analyzer” that operates by scouring blogs, social networks, and commercial and public databases to derive personality traits. The patent claims that “the system determines a trustworthiness score of the person based on the behavior and personality trait metrics using a machine learning system,” with the weight of each personality trait either hard coded or inferred by a machine learning model.33 It claims to identify traits as “badness, anti-social tendencies, goodness, conscientiousness, openness, extraversion, agreeableness, neuroticism, narcissism, Machiavellianism, or psychopathy.”34 Although Airbnb asserts that the company is not currently deploying this software,35 the very acquisition of a “trait analyzer” raises concerns that the company refuses to encapsulate the interests of those affected.36
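
For illustration only, the kind of scoring the patent language describes might look like the following sketch. This is not the Trooly or Airbnb system; the trait names, values, and weights are invented.

```python
# Hypothetical reconstruction of the kind of scoring the patent language
# describes: a "trustworthiness score" as a weighted combination of
# personality-trait metrics. NOT the actual Trooly/Airbnb system; the trait
# names, values, and weights are invented for this example.
trait_metrics = {            # per-person metrics, e.g., derived from scraped text
    "conscientiousness": 0.7,
    "neuroticism": 0.2,
    "narcissism": 0.1,
}

# Weights may be hard coded by the designer or learned from data; either way
# they are chosen outside the view of the person being scored.
weights = {
    "conscientiousness": +1.0,
    "neuroticism": -0.5,
    "narcissism": -0.8,
}

score = sum(weights[trait] * value for trait, value in trait_metrics.items())
print(f"trustworthiness score: {score:.2f}")
```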

Examples of practices that harm and run contrary to the interests of users abound on social media platforms, especially around demonstrated biases and invasions of privacy. Although social media companies offer many services that appeal to users, the companies have interests that diverge systematically from those of users. Platform companies largely profit from data generated by each person’s activities on the site. Hence, the companies seek to maximize user “engagement.” Each new data point comes when a user does–or does not–click on a link or hit a “like” button. The platform uses that information to tailor content for users and to sell their information to third parties for targeted advertising and other messages.37 Chamath Palihapitiya, former vice president for “user growth” for Facebook, has claimed that Facebook is addictive by design.38 Sean Parker, an original Facebook investor, has acknowledged that the site’s “like” button and news feed keep users hooked by exploiting people’s neurochemical vulnerabilities.39

Privacy loss is a particular harm resented by many. Privacy can mean seclusion, hiding one’s self, identity, and information; it can convey control over one’s personal information and who can see it; it can signal control over sensitive or personal decisions, without interference from others; or it can mean protection against discrimination by others based on information about oneself. All these meanings matter in the case of Tim Stobierski, who, shortly after starting a new job at a publishing house, was demonstrating a Facebook feature to his boss when an advertisement for a gay cruise appeared on his news feed.40 He wondered, “how did Facebook know that I was interested in men, when I had never told another living soul, and when I certainly had never told Facebook?”41 The Pew Research Center showed that about half of all Facebook users feel discomfort about the site’s collection of their interests, while 74 percent of Facebook users did not know how to find out how Facebook categorized their interests or even how to locate a page listing “your ad preferences.”42 A platform’s assumptions remain opaque even as users resent the loss of control over their information and the secret surveillance.43 Tech companies may respond that users can always quit. Here, too, a conflict of interests is present. Facebook exposes individuals to psychological manipulation and data breaches to degrees that they cannot imagine.44 Most users do not even know how Facebook uses their data or what negative effects can ensue.45 The loss of control compounds the unintended spread of personal information.

The interests of tech platforms and users diverge further over hateful speech. Facebook’s financial incentive is to keep or even elicit outrageous posts because they attract engagement (even as disagreement or disgust) and hence produce additional monetizable data points.46 Facebook instructs users to hide posts they do not like, or to unfollow the page or person who posted it, and, only as a third option, to report the post to request its removal.47 Under pressure, Facebook established an oversight review board and charged it with evaluating (only an infinitesimal fraction of) removal decisions. Facebook itself determines which matters proceed to review.48 Directed to promote freedom of speech, not to guard against hatred or misinformation, the board has so far done little to guard against fomented hatred and violence.49

Large tech companies are gatekeepers; they can use their position and their knowledge of users to benefit their own company over others, including third parties that pay for their services.50 As one observer put it, “social media is cloaked in this language of liberation while the corporate sponsors (Facebook, Google et al.) are progressing towards ever more refined and effective means of manipulating individual behavior (behavioral targeting of ads, recommendation systems, reputation management systems etc.).”51

The processes of AI baffle the open and rational debates supporting democracies, markets, and science that have existed since the Enlightenment. AI practices can nudge and change what people want, know, and value.52 Differently organized, learned algorithms could offer people some control over site architecture and content moderation.53

Dangers from social media manipulation materialized in the 2020 U.S. presidential election. Some conventional media presented rumors and falsehood, but social media initiated and encouraged misinformation and disinformation, and amplified their spread, culminating in the sweeping erroneous belief that Donald Trump rather than Joe Biden had won the popular vote. False claims of rigged voting machines, despite the certification of state elections, reflected and inflamed social distrust.54 The sustainability of our democratic governance systems is far from assured.

Building trust around AI can draw on commitments to participation, useable explanations, and iterative improvements. Hence, people making and deploying AI should involve broader and diverse stakeholders in decisions around what uses algorithms are put to; what data, with which features, are used to train the algorithms; what criteria are used in the training process to evaluate classifications or predictions; and what methods of recourse are available for raising concerns about and securing genuine responsive action to potentially unjust methods or outcomes. Creative and talented people have devised AI algorithms able to infer our personal shopping preferences; they could deploy their skills going forward to devise opportunities for those affected to participate in identifying gaps and distortions in data. Independent experts in academic and nonprofit settings–if given access to critical information–could provide much-needed audits of algorithmic applications and assess the reliability and failures of the factors used to draw inferences.
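
One concrete form such an independent audit could take, as a minimal sketch: assuming auditors are given the system’s decisions, observed outcomes, and demographic group labels (the records below are invented), error rates can be compared across groups to surface disparities.

```python
# Minimal sketch of an external audit, assuming auditors receive the system's
# decisions, observed outcomes, and demographic group labels. The records
# below are invented for illustration.
from collections import defaultdict

records = [
    # (group, algorithm's decision, observed outcome)
    ("A", 1, 0), ("A", 0, 0), ("A", 1, 1), ("A", 1, 0),
    ("B", 0, 0), ("B", 1, 1), ("B", 0, 1), ("B", 0, 0),
]

false_positives = defaultdict(int)
negatives = defaultdict(int)

for group, decision, outcome in records:
    if outcome == 0:                 # person did not experience the outcome
        negatives[group] += 1
        if decision == 1:            # ...but the system flagged them anyway
            false_positives[group] += 1

# Large gaps in false positive rates across groups flag potential bias.
for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"group {group}: false positive rate = {rate:.2f}")
```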

Investment in participatory and information-sharing efforts should be commensurate with the risks of harms. Otherwise, the risks are entirely shifted to the consumers, citizens, and clients who are subjected to the commercial and governmental systems that deploy AI algorithms.

As AI escalates, so should accessible methods of recourse and correction. Concerns for people harmed by harassment on social media; biased considerations in employment, child protection, and other governmental decisions; and facial recognition technologies that jeopardize personal privacy and liberty will be echoed by known and unknown harms in finance, law, health care, policing, and war-making. Software systems to enable review and to redress mistakes should be built, and built to be meaningful. Designers responding that doing so would be too expensive or too difficult given the scale enabled by the use of AI algorithms are scaling irresponsibly. Responsible scaling demands investment in methods of recourse for errors and bias commensurate with the risks of errors and bias. AI can and must be part of the answer in addressing the problems created by AI, but so must strengthened roles for human participation. Government by the consent of the governed needs no less.55

Self-regulation and self-certification, monitoring by external industry and consumer groups, and regulation by government can tackle misalignment and even clashes in the interests of those designing the learning algorithms and those affected by them. Entities should compete in the marketplace for trust and reputation, face ratings by external monitors, and contribute to the development of industry standards. Trust must be earned.


authors’ note

We thank Christian Lansang, Maroussia Lévesque, and Serena Wong for thoughtful research and advice for this essay.

© 2022 by Cynthia Dwork & Martha Minow. Published under a CC BY-NC 4.0 license.

Endnotes