Risk & Responsibility: Social Science Research as a Modern “Anti-Politics Machine”
This essay examines the distortions introduced into research agendas and research design by the effort to avoid seeming “political.” The rhetorical, institutional, and disciplinary operations of the social science research enterprise worldwide serve to divert attention from questions of power and collective accountability to a focus on technical interventions, institutional risk, and individual responsibility. These practices combine with the specific circumstances of social science research in the Middle East and North Africa—where the exercise of political power in the form of international interference, irresponsible autocracy, civil disorder and violence, and protracted economic poverty and duress is difficult to conceal—to shape research terrains and agendas, and to create a particularly, and tellingly, troubled research environment.
The Middle East and North Africa is often said to be too dangerous to study—both literally, because there are unusually high security risks in conflict-affected or authoritarian contexts, and figuratively, because scholars may be penalized for raising uncomfortable questions or producing inconvenient findings. Indeed, some social scientists have been moved to suggest that it is actually impossible to study the region and its impact in the world systematically. In the aftermath of the attacks of 9/11, historian Bruce Cumings wrote, for example, that “in its utter recklessness and indifference to consequences, its craven anonymity, and its lack of any discernable ‘program’ save for inchoate revenge, this was an apolitical act. . . . For these reasons, it seems to me that social science can have little to say about September 11.”1
In fact, of course, the attacks of 9/11 were eminently “political” and precisely the sort of event that called for social science research and analysis. Yet, as conflict scholar Jacob Mundy argued in describing research on the Algerian violence of the 1990s,
Terrorism, like genocide, is an intolerable form of violence. As such, it often produced antiscientific attitudes and irrationally reactionary policies. But the antipolitics of terrorism studies and counterterrorism doctrine goes further than that. First, the science and management of terrorism has largely failed to appreciate the political conditions of the origin of “terrorism” in the Cold War. . . . It has pretended to advance the objective and neutral study of an object that is essentially and incorrigibly political.2
The reluctance to confront politics and the preference for “objective and neutral,” predictable, manageable, and ultimately “safe” research have discouraged systematic study of the Middle East and North Africa, hindered collaboration among researchers, distorted both popular and scholarly perceptions of the region, and weakened policymaking. Yet the causes and consequences of the conviction that such research is exceptionally risky are rarely examined. As social scientists and policymakers alike have come to redefine political contests as management challenges, shifting both attention and responsibility from power holders to technocrats, they have increasingly identified what resists such management as “unresearchable.”
An antipolitical reimagining of the sources and solutions of conflict, violence, repression, and poverty has taken hold in the social sciences, obscuring notions of power in favor of mechanical characterizations that reduce social and political life to collections of data points, event series, and statistical matrices, which in turn produce risk-management strategies and technical “fixes.”3 The process of depoliticizing research and policy reflects systematic efforts among academic, scholarly, and scientific authorities and within institutions and disciplines to remove political issues from debate and contestation. Decades of disclaiming political intentions or implications—of what anthropologist James Ferguson, writing about “development,” called the workings of an “anti-politics machine”—seem to have produced a simulacrum of a research enterprise in which students are trained, projects funded, findings published, and articles cited with little regard to broader purposes other than the reproduction of the institutions and disciplines themselves.4
The Middle East is not a propitious focus for a “merely technical” approach to social research, and that is one of the reasons it seems so risky: the region fits poorly into the conceptual frameworks favored by the anti-politics machine. Apparently in permanent crisis, oscillating between the ungoverned territories of failed states and the oppressive surveillance of brutal dictatorships, the Middle East is a perennial and disquieting puzzle. As political scientist Jillian Schwedler noted, “It is hard to find an issue related to the Middle East or Islamic world that isn’t saturated in tense debates about what’s ‘wrong’ with the region, how to ‘fix’ it, and indeed what the world ‘should’ look like.”5
Even ostensibly technical issues like environmental policy are seen through the lens of crisis and catastrophe: according to environmental anthropologist Jessica Barnes, “concerns about water scarcity, food insecurity, climate disasters, and resource degradation play into common associations of the region with conflict, malfunction, and despair. . . . [We] need to move beyond thinking about the environment just as a problem space to consider it as the space in which people are living their daily lives.”6 Yet substantive knowledge about the region is supplanted by discourses of absence: non-democracies, failed states, lack of security, economic scarcity. What is missing stands in for, and obscures, what is there.
Research in which “people are living their daily lives” in circumstances of duress—in autocracies, civil strife, military occupation, extreme poverty, and precarity—is not easy.7 It is often logistically complex and deeply disheartening. Yet the pervasive sense of mystery and menace about the Middle East is also manufactured by the organization of the research enterprise itself. Academics based not only in the autocracies of the region but also in North America and Europe routinely avoid research and teaching about issues that might be construed as provocative: that is, “political.”
The war in Gaza that began with the Hamas attack on Israel on October 7, 2023, illustrated and exacerbated the inclinations of academic specialists to shrink from contesting the framing of the region as a perennial source of crisis and to leave commentary to advocates, pundits, and polemicists. In fact, for many years, disincentives to fulfill what was once deemed a scholarly responsibility—to disseminate findings publicly—had conspired with meager and selective funding to discourage many social scientists from conducting research in or on the region altogether.8 A 2019 survey of research scholars and scientists based in the Middle East reported that more than 90 percent wanted to leave the region and would accept a permanent position abroad.9 The Scholars at Risk Network, which helps find academic positions for scholars fleeing repression, saw a major and sustained increase in applications from the Middle East in the decade after 2010, the year that marked the start of the Arab Spring.10 In 2019, the Middle East Studies Association of North America warned against research in Egypt, presumably assuming that the dangers of work in Libya, Yemen, Saudi Arabia, Syria, Iraq, and elsewhere in the region went without saying.11
Dire as this picture might seem, however, the Middle East and North Africa region was merely a particularly stark and powerful illustration of broader trends in social science research. Many of the causes and consequences of efforts to depoliticize or “de-risk” research—to make it both literally and figuratively “safe”—illustrated a profound transformation in social science in the last half-century. Changes that reshaped the audiences, resources, and agendas of social science in the Middle East and North Africa also reshaped the production of knowledge, evidence, and debate globally. As Ferguson said of his case study, tiny Lesotho made visible “processes that are likely to be present in less extreme cases, but obscured by the haze of plausibility and reasonableness.”12 So, too, the haze of plausibility that surrounds social science research globally today burns away in the Middle East and North Africa, revealing mechanisms that, while they may serve other powerful—indeed “political”—purposes, are ill-suited to the better understanding of human societies, their economies, and polities: that is, the putative purpose of social science.13 The workings of the research enterprise as an anti-politics machine may be more obvious in the Middle East and North Africa, but the region’s story is a cautionary tale for social science across the world.
How does the social research anti-politics machine work? How did support for political purposes and agendas—once deemed a virtue—become a liability for the social sciences? After all, social science arose with the modern state during the eighteenth and nineteenth centuries to serve political purposes. As governments assumed increasing responsibility for the well-being of their citizens, the social sciences blossomed in efforts to learn about, and shape, popular circumstances and aspirations.14 Research on education, health, income, mobility, and many other features of social life sprang up, categorizing, systematizing, and organizing government interventions from schooling to policing, from Progressive Era America to British-controlled Egypt.15
The American adoption of the Humboldtian model of higher education linking research to university teaching in the late nineteenth century located the social sciences in universities. This permitted their practitioners a measure of autonomy from their political benefactors, protected by what became known as academic freedom: the rights to experiment with novel ideas, challenge conventional wisdom, question authority, and foster “critical thinking” in students. Nonetheless, the knowledge produced by social scientists was understood to be directed ultimately to the common good, which is why many governments funded social science research at universities.
In the several decades after World War II, however, the conviction that, in their efforts to promote welfare, governments had become too large, too expensive, too demanding, and too intrusive eroded confidence in what political scientist James Scott called the high modernist conception of the purposes of government—and in many of the associated enterprises, including universities.16 The contraction of the state in favor of the market was heralded as the new solution to the perennial problem of promoting well-being, and it was accompanied by a change in the conception of well-being itself, increasingly defined not as social welfare but as individual freedom. In this reading, welfare was no longer a claim on government service but a release from government intrusion.
The withdrawal of the state from its avowed activist intervention in social life meant that the social sciences lost their chief patron and organizing principle. In a context in which freedom would work magic, the specialized knowledge of the social sciences was not something governments needed, nor needed to fund.17 With the dismantling of the welfare state and the proliferation of cross-border public policy challenges like terrorism and climate change, government pivoted from promotion of welfare to protection against loss: that is, risk management.18
The erosion of public purpose precipitated a crisis of authority in the social sciences, for the organizations that housed and supported them—universities—and for the disciplines through which they were pursued, requiring very different rationales, institutional frameworks, and disciplinary approaches. As politics came to be viewed as a source of devious and divisive interference in the workings of the market or, more recently, the imperatives of security, it came to be seen as a risk needing to be managed, and social science research was fashioned into an anti-politics machine, a mechanism by which to obscure, dilute, and enfeeble the political, ideological, or social rationales for which such research was once celebrated, in favor of “technical” interventions.19 As both an “undirected trend and a deliberate tactic,” the depoliticization of social science entailed a collection of tools and instruments appropriately characterized as an anti-politics machine.20
As international development scholar Rajesh Venugopal has argued, these instruments were deployed on several levels: moral authority, institutional policy and practice, and disciplinary focus.21 At the most abstract level, moral authority was assumed by technocrats who cast politics as “a selfish, predatory and divisive force” and asserted their superior status by distancing themselves from “political actors, ideologies, competition, and discourse.”22 Around the world, governments turned from the impassioned ideological commitments of the Cold War to “evidence-based” policymaking with the arrival of “the end of history.”23 In the Middle East and North Africa, autocratic governments abandoned ideological alliances in favor of pacts based on “security” that, like “development,” represented an apparently apolitical set of purposes and prescriptions. This sounded—and was deliberately designed to sound—inoffensive, serving to “depoliticize” social research and to find, as Ferguson put it, “technical solutions to technical problems.”24
The authority of social science research continued to be buttressed by its association with policy, even as policy was increasingly divorced from the research said to support it. The International Monetary Fund (IMF), for example, asserted that “economic research is a core activity” of the institution and listed more than a dozen research areas on its website, including development economics, economic modeling, international finance, international trade, and monetary policy. Yet there was ample evidence that a “schizophrenic division [had] come to characterize the IMF’s approach to policy research on the one hand and policy practice on the other” and that “the Fund’s theoretical perspective did not shape its practice.”25 Despite the patina of scientific research, the Fund’s advice was often clearly wrong. After the surprise of the Arab uprisings of 2011, IMF managing director Christine Lagarde remarked, optimistically as it turned out, that “the IMF had learned some important lessons from the Arab Spring. . . . Let me be frank, we were not paying enough attention to how the fruits of economic growth were being shared.”26 There was little evidence, however, that IMF policies changed appreciably; several years later, IMF staff still had “a difficult time assessing the impact of IMF policies [on] poverty, equity concerns, unemployment, and provision of social services like health and education.”27
Just as research provided a veneer of authority to economic and development policy driven by interests and ideologies at some divergence from the putative beneficiaries—in the case of the IMF, those of its donor countries—research served a similar role in the ever-expanding domain of security. Indeed “security,” broadly understood, was becoming an evergreen rationale for policy interventions and, perforce, for the technical research that provided their ostensible foundation. Both European and U.S. government and not-for-profit funders typically accented issues that represented cross-border threats—terrorism, migration, climate change, public health—and entailed an antipolitical reimagining of conflict’s sources and solutions, diverting attention from the region’s conspicuous role in the profitable international arms trade, for example, to managing the refugee populations produced by the conflicts it fuels. As sociologist Helmut Anheier pointed out, these kinds of funders increased “pressure on the social sciences to demonstrate impact as a way of justifying their relevance and indeed their legitimacy as recipients of public funds.”28 At the same time, however, “impact” and “relevance” were drained of political content, coming to serve instead as diversions from the growing gulf between research and policy.
The need to depend on funders that quite naturally had their own agendas also eroded the ability of social scientists to autonomously develop and sustain the sort of distinctive research agenda or scholarly profile that characterized much of the best-known social science in Europe and North America. U.S. academics adopted research agendas made in Washington—on issues like counterinsurgency, political order, economic development, and democratization—only to find themselves denied policy impact. The Project on Middle East Political Science (POMEPS), for example, was established in 2010 as a deliberate effort to mobilize social scientists to inform U.S. policymaking in the region. Coming after the failed efforts of American academics to prevent or even influence the initiation and prosecution of the 2003 Iraq War, it “worked to promote such public and policy engagement, with hundreds of academics each year contributing their expertise on the Middle East on publishing platforms . . . and through direct policymaker engagement.” Over the course of time, however, POMEPS leadership came to recognize “a sharp challenge to this model of policy engagement on the Middle East,” as many policymakers evinced “a fundamental disrespect for academic expertise.”29
The second level of the workings of the anti-politics machine appeared in the institutional design and operations of the home institutions of these academics: universities. By the 1990s, “risk management” began to recast complex socioeconomic dilemmas as inoffensive technicalities in “risk matrices,” and the authority of supposedly neutral expertise was superseding debate, once the hallmark of knowledge production, as the device for resolving disputes. Universities assumed the mantle of arbiters of standards of research, supplementing and often supplanting peer review with institutional review processes while simultaneously developing elaborate self-protective risk-management systems designed to reflect the “predictable, depersonalized, procedural, rules-based” processes of the anti-politics machine.30
The growth of enormous bureaucracies devoted to “research administration” in American universities reflected reliance on increasingly competitive external funding: public funding was now largely project-based and required elaborate reporting; philanthropic support and commercial investment reflected the management practices of the private sector. Although research administrators justified much of the university oversight on the grounds that research conducted under university auspices met ethical standards, in fact, research management was shaped by multiple imperatives, few of which involved ethics. Indeed, institutional concerns with compliance and liability overtook ethics as the purpose of the reviews, and risk management created a new locus of responsibility for research integrity.
Columbia University’s webpage on “Responsible and Ethical Conduct of Research” was hardly unusual. In September 2024, it comprised one sentence—“Columbia is dedicated to the highest standards of research integrity and is committed to responsible and ethical conduct for all those involved in research”—and provided links to mandated training for “certain individuals participating in projects funded by [federal] agencies.”31 The university’s purpose was to ensure compliance with externally imposed standards. This “pass-through” role of the university, reflecting external standards of research and locating internal responsibility for their implementation in the individual researcher, epitomized the operation of the anti-politics machine.
The assignment of responsibility for research integrity to individual researchers—those “individuals” expected to complete the “mandated training”—represented a hollowing out of a collective enterprise of knowledge production that left research universities serving as little more than mechanisms for risk management, dispute adjudication, and commercialization of scientific discovery. As political scientist Dagmar Rychnovská observed, “the pressure to regulate new knowledge comes mostly from outside academia . . . and what is perceived as a problem is not the actual knowledge but its anticipated consequences.”32 Researchers were tasked individually with monitoring and mitigating the risks entailed in their research, trying to anticipate consequences and navigating through thickets of changing definitions of what might be deemed matters of “national security.”
By the 2020s, nearly all social science research conducted in universities in Europe and North America was subject to institutional review to assess compliance with some kind of putative ethical standard, and a variety of international models were being adapted in the Middle East and North Africa, where a number of countries established national research ethics guidelines.33 Although universities bristled at the suggestion that they would forbid or prevent any type of research, in fact, the ethical regulation of research gave the university the authority and the mechanisms by which to control the production of knowledge.34 As criminal justice scholars Mitch Librett and Dina Perrone observed, “the complex and bureaucratized process of review offers a serendipitous device to frustrate and deter what is considered a potential threat to an institution’s reputation or access to revenue sources.”35 Soon, many ethics review processes included sections on risk assessment, data protection, data ownership, and national security, as well as judgments on reputational harm. This was particularly salient in international research.36
In the aftermath of 9/11 and the U.S.-led Global War on Terror at the beginning of the twenty-first century, fears of hostile or malicious interference in, or theft of, scientific data, methods, or findings, whether by foreign powers, commercial competitors, or “nonstate” networks of criminals or terrorists, gave rise to new efforts to secure, or to securitize, research. The notion that social science or humanities research might be what natural scientists called “dual use” had arisen in the United States in debates about the role of psychologists who, as CIA contractors at Guantanamo, designed the United States’ “enhanced interrogation” (torture) program, and about the Department of Defense–sponsored Minerva Research Initiative, which was described as “supporting university-based and unclassified social science research aimed at improving our basic understanding of security, broadly defined.”37 These projects were widely viewed in the Middle East and North Africa as conflating scholarly research and intelligence gathering, which in turn contributed to creating or exacerbating precisely the hostile environment for research about which the universities claimed to be concerned.38
The consequent legal compliance requirements heightened the regulatory role of universities, creating still greater challenges for social scientists working in the Middle East. Comparative education scholar Dina Kiwan quoted a sociologist who worked in the region:
the number of organizations they put on the terrorist list by the U.S. who happen to be in our region are tremendous—they have Palestinian organizations on the terrorist list, they have Syria’s government on the terrorist list, they have Iran, they have Hezbollah and half the Lebanese population—what am I supposed to do, stop doing research?39
Far from providing protected time and space for scholarly research, or advocating on behalf of independent research programs or agendas, the institutions collaborated in narrowing and limiting the permissible—indeed, imaginable—scope of research. And as Mundy observed, the danger of managerialism is not just that it misunderstands reality by evading questions of power, geography, and history but that it attempts to bring into existence its apolitical understanding of the world.40 The institutions, from universities to funders to publishers and editors, involved in the enterprise of social science research shaped not only the research produced and disseminated but also the development of the disciplines and fields of study.41
Indeed, depoliticization also operated at the third or disciplinary level of the anti-politics machine, in the elevation of quasi-scientific methods and approaches, from statistics and econometrics to rational choice and formal modeling. The appeal of apolitical analysis was enhanced by new technologies that produced “big data” and seemed—at least at the outset—to promise both new sources of information and new methodological techniques that had no apparent political biases or implications; but the trends had begun well before the explosion of digital technologies.42 As public interest in social science and the authority of social scientists waned, and as universities moved from fostering to regulating research, social scientists struggled to reestablish an audience for their work. Particularly in the United States, they elected to behave as if they were accountable to each other, even if it was not clear who else might care.
This self-referential accountability produced the myriad and not uncontroversial efforts at the turn of the twenty-first century aimed at ensuring data access, replicability, and transparency—features of research of little concern to most audiences but deeply important to investigators aspiring to scientific authority. In 2012, for example, the American Political Science Association (APSA) determined that “researchers have an ethical obligation to facilitate the evaluation of their evidence-based knowledge claims through data access, production transparency, and analytic transparency so that their work can be tested or replicated.”43 A far cry from early commitments to foster good government or enhance public administration, this seemed to suggest that technical issues—replicability, for example, or risk-mitigating data management—were the principal criteria of responsible research.
This focus on intradisciplinary accountability fostered an increasing preoccupation with method and particularly the sort of quantitative methods that produced and utilized replicable data. Critics of this approach recalled psychologist Abraham Maslow’s 1956 quip about the methodologist’s boast: “I don’t know or care what I’m doing, but see how accurately I’m doing it?”44 Nonetheless, methodology became increasingly important in social science training, and quantitative methods from statistics and, as computing power grew, complex data science and data analysis became more widespread. For many social scientists, the enthusiasm for quantitative approaches led to suspicion of other epistemologies deemed “soft,” subjective, unscientific, and—perhaps—dangerously political.
For social scientists working in the Middle East and North Africa and hoping to be recognized internationally both for the quality and impact of their research and the integrity with which they conducted it, this posed formidable challenges. The assumption of the scientific rigor of quantitative approaches ignored both the limits of available data and, equally importantly, the sort of distortions introduced by ostensibly universal definitions of that data. In the first place, the widespread reliance on official data sources concealed the debates about the numbers themselves. Many social scientists uncritically used data supplied by governments to international organizations and international financial institutions like the United Nations, the World Bank, and the IMF, despite the fact that many governments were known to produce and publish inaccurate and misleading statistics. Moreover, United Nations and World Bank data were organized by country and collected by governments—an approach development scholar Adam Hanieh and others called “methodological nationalism”—and, even when reported accurately, such data systematically underestimated the impact of the substantial flow of cross-border regional or global labor movements, commercial links, and financial networks, as the debates on growing inequality in the Middle East illustrated.45
But there were also other epistemological critiques. Survey research, such as the much-used and often valuable Arab Barometer, was predicated on a combination of normative commitments and empirical presumptions that were hardly uncontested: that the “respondent” is, and should be, an individual, for example; that “choice” is both a preference and a right; or that “opinions” are, and should be, personal reflections, freely arrived at and expressed.46 In short, survey research presumed a liberal individualism embedded in the method that may not have reflected how public opinion actually developed, or was expressed and acted on. Indeed, as political scientist Susanne Rudolph suggested, it may be a mistake to assume that the individual is “the unit of opinion” when many survey answers are what might be called “crowd-sourced” in families, neighborhoods, and communities.47 Producing the nuanced understanding of cultures, dispositions, or moods that such surveys seek to reveal was in fact a more complicated task than simply pretesting survey instruments.
The privileging of highly technical social science research methods—statistical analysis, surveys and polling, formal modeling, “big data,” “experimental” methods, and the like—also had material consequences, disadvantaging the sort of small-scale qualitative archival and ethnographic research that was more likely to be affordable to scholars operating in the Middle East and North Africa. The focus on what could be captured in quantity and “at scale” thus created and sustained disparities in cross-national status hierarchies, research collaborations, and citation networks. Most social science in North America and Europe unselfconsciously claimed universality, while research in the Middle East and North Africa typically located its claims in a particular time and place. This divergence was evident in citation practices: Americans and Europeans cited each other; Middle Eastern and North African scholars cited Europeans and North Americans. As Kiwan observed, “Such differences in publication and citation practices reify and consolidate the inequities in transnational knowledge production.”48 Deploying highly technical methods was often a way for social science elites to signal their status—a cosmopolitan currency—rather than a source of reliable findings or useful evidence for policy.
Despite—and perhaps because of—the appearance of technical neutrality in these disciplinary practices, the production of indicators and metrics, the design of typologies, and the construction of units of analysis were not seen to be the product of “a political process, shaped by the power to categorize, count, analyze, and promote a system of knowledge that has effects beyond the producers.”49 As a result, the units with which social and political phenomena were recognized often became the standards to which their subjects aspired. They were, as legal scholar Kevin Davis put it, “destabilizing the line between the normal and the normative.”50
In fact, the normative occasionally appeared nakedly, if inadvertently, as when social scientists celebrated the world’s resemblance to their disciplinary agendas. Successive presidents of the American Political Science Association, for example, defined the domain of—and implicitly the limits of—modern political science: in 1987, Samuel Huntington welcomed a number of new democratic governments, saying “command economies have no use for economists, nor authoritarian politics for political scientists. . . . The development of democracy called forth political science and political scientists. . . . All this bodes well for the future of democracy and the future of political science.”51 A little over a decade later, Robert Keohane declared that “political scientists can only thrive where democracy flourishes.”52 The presumption that the universalism of scientific technique might eclipse the particularism of actual politics was noteworthy. As Susanne Rudolph observed (also in an APSA presidential address), “the imperialism of categories entails an unself-conscious parochialism.”53
This parochialism—including delimiting the “politics” encompassed by political science to the familiar practices and procedures of democracy—contributed to a warped understanding of the social world and distorted, among other things, comprehension of nondemocratic polities. As Ariel Ahram and Paul Goode observed,
Deliberately and incidentally, features within the disciplines function to impede, conceal, and diminish efforts to accumulate knowledge about particular authoritarian regimes and authoritarianism more generally. They not only stifle the search for answers, but even the formulation of relevant questions. . . . Professional disciplines act as filters; they provide a structure for recalling, creating and ordering certain kinds of social facts, while discarding or obscuring other kinds of observations. Knowledge about authoritarianism is often cast aside, diminished or compartmentalized in such a way as to become inaccessible and thus forgotten.54
All aspects of the workings of the anti-politics machine—authority, institutions, and disciplines—shaped the definition of social research in ways that diverted resources, attention, and audiences from some of the most important features of social life in the Middle East and North Africa. As research grew increasingly to serve as a facade of legitimacy rather than a supply of evidence in support of policy, university risk-management practices often came to entail little more than policing research, contributing to the creation of what Kiwan called “forbidden knowledge”: that is, knowledge that is “too sensitive, dangerous or taboo to produce.”55 Avoiding politics amplified technocratic claims to authority, expanded risk management to encompass research regulation, and subtly but discernibly narrowed the scope and methods of social science (including limiting politics to “democracy”). This both constricted the questions posed about the Middle East and North Africa and crippled the methods used to address them.
The anti-politics research enterprise produced little positive knowledge about the Middle East and North Africa and many surprises, as the bewilderment that attended the uprisings in the Arab world in 2010–2011 illustrated. Melani Cammett and Isabel Kendall surveyed English-language political science journals over the first two decades of the twenty-first century and concluded:
The proportion of MENA-focused articles has increased, particularly after the 2011 Arab Spring uprisings, but remains strikingly low. With respect to topics and methods, research on the Middle East is increasingly integrated in mainstream political science, with articles addressing core disciplinary debates and relying increasingly more on statistical and experimental methods. Yet, these shifts may come at the expense of predominantly qualitative research, and primary topics may reflect the priorities of Western researchers while underplaying the major concerns of Middle Eastern publics.56
They attributed the “marginalization” of scholarship on the Middle East and North Africa to the usual suspects: the absences, failures, and crises with which we began, “limitations on data collection and generation arising from the region’s large endowment of authoritarian regimes, which restrict access to information; the high prevalence of violent conflict, which limits the ability to conduct fieldwork and undercuts institutional efforts to catalog data; and the high requisite investment in language skills to study the region.” They also suggested in passing, however, that there might be something about the discipline itself that shaped audiences for research on the region: “The fact that most ‘big’ research questions in political science have emerged from the experiences of advanced industrialized countries in the West also has limited the perceived contributions of findings from the region to the discipline.”57
Indeed, not only the questions themselves but also the datasets and survey instruments designed to address them reflected a presumption of universal comparability that imposed analytical blinders. Widely used typologies of, for example, regime type drew on preexisting frameworks that did not reflect local circumstances, and scholars struggled to determine “what is this a case of?” The puzzlement over the Arab upheavals after 2011 was indicative of the poverty of such templates, as social scientists debated whether these were revolts, revolutions, democratic transitions, coups, or authoritarian upgrading; whether the appropriate historical analogy was 1989, 1919, or 1848.58 Political scientist Rabab El-Mahdi showed that much of the research on protest and contestation accented the dramatic over the quotidian, “singular events over dynamic processes,” momentary and sporadic episodes rather than sustained trends, thereby missing subtler patterns.59 As international affairs scholar Gregory Gause pointed out, “The vast majority of academic specialists on the Arab world were as surprised as everyone else” and concluded that “as paradigms fall and theories are shredded by events on the ground, it is useful to recall that the Arab revolts resulted . . . from indigenous economic, political, and social factors whose dynamics were extremely hard to forecast.”60
Gause’s self-criticism, like Lagarde’s admission that the IMF had gotten the impact of its policies wrong, was unusual: failures to predict, or even explain, the economics and politics of the Middle East and North Africa were rarely acknowledged. Instead, responsibility was further shifted, this time from the researcher to the subjects themselves, who failed to fit the profile of the standard respondent or the conventional typology: the pathologies of the Middle East—the failures, absences, disasters, and malfunctions—became both question and answer in this parody of research.
It was no small irony that the anti-politics machine that undergirded social science research was designed to produce and sustain ignorance about the questions of power, conflict, hierarchy, wealth, identity, and justice that were at the core of the social sciences themselves. In fact, however, as sociologists Matthias Gross and Linsey McGoey pointed out, “ignorance is not a motionless state. It is an active accomplishment requiring ever-vigilant understanding of what not to know.”61 Whether the subject is painful episodes of the past, such as Turkish denial of collective violence against Armenians, or more contemporary instances of brutal, cruel, and exploitative actions by the U.S. government and American allies in Palestine, Yemen, Libya, and elsewhere, the manufacture of ignorance takes nearly as much work as the production of knowledge.
The antidote to ignorance is, of course, knowledge, and both are produced in ongoing processes. Indeed, as Scholars at Risk observed in their comments on the campus protests over the Gaza war, “academic freedom is foremost about processes, not ideas, specifically processes that promote truth-seeking and transmission of knowledge. . . . The university especially has an affirmative obligation to promote the widest expression of academic freedom, insofar as this leads to the greatest quantum of knowledge and truth-seeking.”62 Put differently, as biologist Stuart Firestein said, “we must become comfortable with teaching that science is not an accumulated pile of facts but an ongoing set of questions.”63
For scholars of the Middle East and North Africa, openness to questions can be particularly important. What attorney and writer Kenneth S. Stern calls “the conflict over the conflict”—the controversies over Israel and Palestine on campuses in the United States—represents a powerful example of how the boundaries of knowledge are simultaneously created, maintained, and renegotiated through controversy. In a November 2023 survey of Middle East scholars, more than 80 percent of respondents said they felt the need to self-censor when discussing issues related to Israel and Palestine. Six months later, the survey authors, Marc Lynch and Shibley Telhami, reported that “respondents to our survey share a wide range of accounts of talks and events being canceled, institutional pressure to be silent or cautious, and appalling campaigns against them by external actors.”65 Well before the beginning of the Gaza war in 2023, however, scholars lamented the shrunken space for research on the conflict over Palestine. As political scientist Nathan Brown observed, “In the United States and abroad, government and private actors are working together—sometimes in tandem but many times in concert—to set the terms for permissible debate and discussion in workplaces, classrooms, boardrooms, and the public square, hindering the development of sound U.S. foreign policy on Israel/Palestine.”66
If the authority of the university is to be located not in its compliance with government regulations and funders’ mandates but in its commitment to the accumulation of knowledge and the pursuit of truth-seeking, the operation of its research administration, including its ethics review processes, should reflect the fact that “research design is a continuous process rather than being fixed at the start.”67 Disciplinary associations should temper their calls for preregistration of research protocols with recognition that the process continues not only in the field but over generations of scholars. And social researchers themselves should reflect on why, apart from the satisfaction of personal curiosity or the accomplishment of personal career aspirations, they conduct research at all. Failure to consider the intellectual traditions and political contexts in which they work—the uses to which their findings and interpretations are put—makes social scientists little more than technocratic tools in the hands of those who utilize them for their own often partisan, and indeed very political, purposes. Researchers’ aims may be varied—to improve social service delivery, enhance government accountability, strengthen national power, promote democracy, counter violence, build state capacity, reform a security sector, even create and disseminate new knowledge—but they have them, and they have an ethical responsibility to acknowledge their ends and see that their means are consonant with those ends. The questions asked, the methods used, the collaborations fostered, the audiences addressed are all shaped and ultimately justified by these purposes. Research can be many things; as comparative politics scholar Stacey Philbrick Yadav reported, for example, her Yemeni collaborators saw research itself as working toward justice, giving voice to the unheard, and documenting the overlooked.68 Such a commitment represents a very different conception of accountability and ethical obligation than simply meeting the requirements of data access and analytical transparency. Without it, social science research is destined to be hobbled by the mechanisms of an anti-politics machine whose workings profoundly distort not only knowledge of the Middle East and North Africa, but understanding of the social world everywhere.