Future Problem-Solving: Artificial Intelligence & Other Wildly Complex Issues
Imagine a bright future for philanthropic and government problem-solving. There is a version of the future in which artificial intelligence, open datasets, and equitable philanthropic practices together enable societies to solve their most complex problems far more effectively than is possible today. Philanthropy has been shifting from a model of charitable giving toward support for systems-level change. In recent decades, new digital technologies have largely served private ends, such as wealth creation and industrial efficiency, rather than the public interest, and datasets are too often held in private hands for proprietary purposes. These trends could yet converge in a more positive direction for a far wider array of people. That future will not come about on its own, and certainly not if left to private markets alone. But with planning and foresight, a brighter future for the climate, international peace, economic inclusion, and other broad societal goals is within reach.
Imagine a future, decades from now, when solving humanity’s great problems through collaborative, systems-level change is possible. Take the ravages of climate change: a global problem with a complex array of potential and actual causes, with harms experienced unequally across regions and populations, with an extremely broad range of possible ways to address it, and with significant political, economic, and technological obstacles to doing so. For instance, the harms from climate change are felt disproportionately by the world’s poorest people, while, overall, the wealthiest cause a larger share of the problem (say, by consuming the greatest amounts of energy and food) and bear the fewest of its costs (living in parts of the Global North far from eroding coastlines, with the wealth and privilege to adapt to a warming planet).
Today, in seeking to address the climate problem, government actors and their partners in philanthropy and civil society cast about for solutions that cover a wide swath of industries—energy, transportation, agriculture, manufacturing, and so forth—and that call upon methods including public policy, economic incentives, technological innovation, consumer behavioral change, community organizing, geopolitical wrangling, and impact investing.
In this bright future, a technological system could help civic-minded actors devise and rank possible solutions to climate change by likelihood of success, relative costs and benefits felt by different communities, and time to enact. When a philanthropist or policymaker seeks to determine where to invest, this simple, publicly available resource clearly identifies the parties best positioned to implement these solutions and makes them easy to reach. The range of actors covers the full breadth of the population, not just those with connections to people in power. The needs of the communities most affected—historically, at present, and in the likely future—can be recognized and addressed. Philanthropists and policymakers can understand prospective market actions and build them into economic modeling, which market actors can in turn incorporate into their own models. Funders, policymakers, and investors devoted to equitable approaches to problem-solving have easy means to enact their strategies and reliable means of accountability. Climate change could be addressed at minimum cost, with the greatest degree of community engagement, and with the strongest likelihood of reducing harm now and in the future.
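To make the idea concrete, consider a minimal sketch of how such a ranking might work. Nothing like this system exists; the interventions, scores, and weights below are entirely hypothetical, and a real system would need far richer models of uncertainty, equity, and community input. The sketch simply shows one way candidate solutions could be scored against likelihood of success, cost, time to enact, and benefit to the most affected communities.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Intervention:
    """A hypothetical climate intervention, as such a system might represent one."""
    name: str
    success_likelihood: float  # estimated probability of achieving the stated goal (0-1)
    cost_billions_usd: float   # estimated cost, in billions of U.S. dollars
    years_to_enact: float      # estimated time from decision to implementation
    equity_benefit: float      # benefit to the most affected communities (0-1)

def rank_interventions(options: List[Intervention],
                       weights: Dict[str, float]) -> List[Intervention]:
    """Order interventions by a weighted score: likelihood and equity count in
    favor; cost and time to enact count against. Higher scores rank first."""
    def score(opt: Intervention) -> float:
        return (weights["likelihood"] * opt.success_likelihood
                + weights["equity"] * opt.equity_benefit
                - weights["cost"] * opt.cost_billions_usd / 100.0
                - weights["time"] * opt.years_to_enact / 10.0)
    return sorted(options, key=score, reverse=True)

if __name__ == "__main__":
    candidates = [
        Intervention("Methane capture incentives", 0.70, 40.0, 5.0, 0.60),
        Intervention("Grid-scale storage buildout", 0.60, 120.0, 10.0, 0.50),
        Intervention("Community solar microgrids", 0.80, 25.0, 4.0, 0.90),
    ]
    weights = {"likelihood": 0.4, "equity": 0.3, "cost": 0.2, "time": 0.1}
    for option in rank_interventions(candidates, weights):
        print(option.name)
```

Even this toy version makes plain where the hard questions lie: who sets the weights, and whose estimates feed the scores.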
As a second example, consider the challenges posed by nuclear proliferation. The existence of nuclear weapons, and the expansion of access to them among more global actors, has posed a long-standing existential threat to humanity. And given the myriad global, regional, and local interests and fears of both state and nonstate actors the world over, the most effective methods to address this threat can be hard to discern. Investments in nonstate actors and researchers who work for peace and security are worthy expenditures. Procedural approaches such as Track II dialogues, which bring together nonstate actors and allow for crucial information-sharing when state actors are not talking officially and directly about the key issues, are another good idea among many. But the need to prevent nuclear catastrophe remains real and pressing, and the risks demand that more be done. What should those investments look like, when the primary decision-makers hold positions of authority that are hard for these nonstate actors to reach effectively?
Note, too, that these two issues intersect. One of the potential solutions to the climate crisis—debated, for sure, but squarely on the table—is to increase the global availability of nuclear energy. To what extent would a major push to increase nuclear energy cause a greater risk of harm, whether by accident (such as at commercial nuclear reactors) or due to knock-on effects in the security regime? And, conversely, how might the effects of global warming, including forced human displacement, intensifying competition for resources, and loss of livelihoods, contribute to geopolitical instability, increasing the risks of conflict between nuclear powers?
In this future world, technology systems would map and parse such interactions between the climate and nuclear fields much more clearly. While predicting the future with a high degree of certainty would remain out of reach, technology-assisted analysis could make the trade-offs, and the likelihood of one outcome or another, clearer. The safety and security concerns associated with nuclear energy would be quantifiable; it would be possible to weigh those costs against the potential benefits to the climate and the economy. The likely effects on particular populations and geographies would flip from invisible to visible.
Is this vision of the future a pipe dream? No such system exists today to inform policymakers, philanthropists, academics, advocates, and others who seek to address these wildly complex and interconnected problems. Perfection of this sort is likely illusory. And a single system to make, or even inform, such decisions by governments and others might be a dangerous approach anyway, not to mention states’ inevitable distrust of a “black box” system (likely developed by the wealthiest nations) that advises against their interests. But the potential to build knowledge and information systems that improve the odds of getting these decisions right—that improve the likelihood that humans could make such systems-level decisions well in the future—is real. It would take intentional investment and careful planning to bring such systems within the realm of the possible.
The fields of philanthropy and technology will face major turning points in the coming years. These opportunities for change offer the potential to address systemic inequality, to improve the effectiveness of philanthropy, and to bring about brighter futures for more people throughout the world. What might we do now, today, using the tools we have in philanthropy, to address past harms and usher in a more equitable future? The goal, of course, is simple: to ensure that philanthropy does more good than harm as we shape—and envision—the future today.
Philanthropy, at its best, is fundamentally about futurism. Philanthropists, in partnership with communities, should imagine and invest in a future that is brighter than the present or past. The goals of futurism and philanthropy are linked.
But not all giving looks to the horizon. As societies, we must provide crucial funding for people to address current-day needs, which often receive the largest outpourings of charitable support: the relief needed, say, after a natural disaster when people in a community do not have clean water to drink or a roof over their heads. These needs are more pressing than ever, as, at the time of writing in spring 2025, government funding for basic human support is falling. But these necessary approaches to giving are more linked to charity and to the role of the state than to what we might think of as systems-change philanthropy.
The real opportunity posed by philanthropy is to make and sustain investments that will change the course of history over time in a positive direction, not simply to fill gaps left by market failures and government funding shortfalls. Philanthropists are in a position to put “patient capital” to work for good over the long haul. Otherwise, it makes no sense to offer tax incentives for people to give, such as through large endowments set aside for perpetual spending, as some donors prefer. It would be much more efficient simply to tax the income and use it to meet current needs in the most direct fashion possible. The tax system in the United States is premised in part on the concept that the benefit of avoiding taxation on income encourages philanthropy and, in turn, that philanthropy makes change possible by drawing upon and supporting changemakers who are not employed by the state.
This future orientation to philanthropy invites us to critique the status quo, imagine a better future, and harness the private sector toward that change. Most of the time, these philanthropic investments fund actors in nonprofit spaces. Philanthropists commonly support those in academia to pursue a course of study or carry out research, as well as activists and movement leaders to advocate for change. In effect, these investments typically supplement government actors where it makes more sense to draw on outside talent or resources, redistribute wealth and power to historically marginalized communities, or help meet other ends that the state and market are not accomplishing. This practice of futurism requires a nuanced understanding of how change comes about, the skill to identify problems that philanthropy can address (as compared with private markets or government actors, for example), and the networks of people and institutions that can bring to life new ideas and approaches.
At many large philanthropic organizations, the typical strategy is to invest financial and other resources in people with the most creative ideas, great passion for their work, and hard problems to tackle. These institutional approaches are not wrong; they can be very effective. That is the core premise of the MacArthur Fellows program, for instance. Investments in creative individuals who carry out life-saving research, who dream up and produce arts and culture that inspire and enliven, and who expand student access to education and other life-giving opportunities: there are plenty of essential investments in people and ideas that plainly redound to the benefit of many, perhaps even all.
But as philanthropic leaders look to the future, they must also acknowledge that there are plenty of philanthropic practices that have done more harm than good. The list of bad philanthropic practices is long; it has inspired full-length books as well as social media sites.1 The worst of these practices perpetuate uneven and unjust power imbalances, in turn reinforcing advantages in society afforded by unearned wealth and status. Other philanthropic practices, often termed “strategic philanthropy,” use large amounts of capital to support ideas and approaches to policy problems that harm communities more than help them. Philanthropy has not been an unalloyed good throughout history—far from it.
This is a time for looking ahead. The rapid development of new information technologies that has characterized the last several decades continues apace (if anything, the pace of change is accelerating). There is a role for philanthropy to play in at once shaping new technologies and applying those technologies to philanthropic practices for the public good.
Charitable practice in the United States existed long before philanthropy was formalized as a sector, dating back to the American colonial period.2 Early American philanthropy consisted mainly of unsystematic donations, but later became organized by ethnic and religious organizations. George Peabody and his contemporaries critiqued the charitable practices of their time as being unorganized, palliative, and parochial.3 Andrew Carnegie challenged the wealthy of his generation to donate most of their wealth during their lifetime and enable the “worthy” to help themselves.4 Carnegie and his peers formed the earliest charitable foundations in the United States. The model and philosophy that Carnegie championed remain pervasive in philanthropy today, though a field of healthy critique has emerged to offer new, more future-oriented models and methods.5
The principal shift in the philanthropic sector is the move from a field oriented toward charity to one that imagines and supports greater equity and justice. Darren Walker, president of the Ford Foundation, encourages philanthropies to devote time and money to dismantling the systems that generate and perpetuate inequality.6 The USAID donor statement on locally led development—now a vestige of the past, given the 2025 attacks on USAID’s funding streams, infrastructure, and website—once took aim at the “philanthropists know best” tradition.7 The MacArthur Foundation, where I work, and many of its peers increasingly hire program officers who are intimately connected to the work, creating within their staff a constructive mix of subject matter expertise and lived experience to inform decision-making on grants and other investments. But Walker’s and his progressive-minded peers’ views about the goals of giving are far from universal; the field of philanthropy, as established today, is inherently pluralistic. As with elected officials, donors represent a wide range of points of view about the direction policies should take over time. This pluralism is one of the field’s great strengths.
Some philanthropists are pushing the model further, advocating for deeper and more systemic change in the way that giving takes place. Many leading institutions are embracing participatory and trust-based philanthropy.8 Philanthropies also increasingly recognize that they cannot—and should not—do their work alone. Groups of donors are working together to tackle the hardest problems. Large-scale collaborative efforts have come together to address climate change, such as the Global Methane Hub and Invest in Our Future, as well as threats to democracy, such as More Perfect and Press Forward.9 Other promising and innovative reforms create new models for the wealthy to share their resources through competitions and pooling of funds, such as Lever for Change, an affiliate of the MacArthur Foundation.10
Artificial intelligence (AI) is poised to become one of the most transformational technologies in human history. It could potentially revolutionize every aspect of our lives—from how we learn and work to how we live, govern, and communicate with one another. Once fully operationalized, it could become a fundamental and ubiquitous application in education and workforce development, health care, government, biotechnology, defense and national security, finance, and most other fields. We are already experiencing its effects in a number of these domains today.
“Technology is neither good nor bad; nor is it neutral.”11 As we publish this issue of Dædalus, one of the biggest unknowns, both for society at large and for the field of philanthropy, is exactly how the future of artificial intelligence will develop. Experts disagree on this question—sometimes, they disagree a lot. Those devoted to the exploration of what is today called “x-risk,” short for existential risk, perceive that AI could bring about the end of human existence in a short window of time. Others argue that this generation of AI tools will usher in the singularity, a blissful phase of human existence marked by far fewer threatening problems and all sorts of new opportunities. Most close observers of and participants in the development and shaping of AI seek to bring about a positive future in which the technology does more good than harm.
The speed of AI’s advancement and deployment means we will face these changes very soon—and some of them are already upon us, affecting human lives today across the globe. We are at a critical stage in AI’s development, which gives us a chance to shape its future and ensure that its benefits are put to work for good. We have an opportunity to apply a sociotechnical lens to the design and application of these new technologies as they materialize and come to market. Such interventions, in turn, can have a positive effect on the lives of billions of people.
When the internet was commercialized in the 1990s, the opportunity to ensure that its benefits were shared in a truly inclusive fashion was missed. Policymakers in the United States, where the technology was principally developed and deployed, failed to create mechanisms to protect the internet from misuse. Instead, the United States set itself on a course of more than three decades of inaction and a laissez-faire approach that has served some individuals extremely well but disadvantaged many other large communities and even countries. Among other failures, we have not ensured the representation and voice of those most marginalized around the world in the development of these new, society-shaping digital tools.
Today, there is a window of opportunity to learn from those mistakes and ensure that civil society worldwide is actively represented in shaping the future of AI. Some of the building blocks for a very different technology policy regime for the AI era are in place, such as the U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.12 The promise of these improved approaches will not be fully realized without substantial coordinated and well-resourced engagement by civil society.
While AI has brought about many benefits, it presents a variety of substantial dangers in both the near- and longer-term. These new technologies can perpetuate bias, with adverse effects for communities that are already marginalized.13 The governance models for containing and shaping new technologies are at such an early stage as to be ineffective, allowing these harms to go unchecked.14 Rather than using the sweeping advancements in technology to address systemic issues, the tech industry is promoting a technosolutionist narrative.15 This profit-driven narrative benefits those at the top of the tech world, who then bring their for-profit and technosolutionist ideologies to philanthropy.16
There are two principal actors engaged in the development and management of artificial intelligence: the tech industry itself and a patchwork of government actors around the world. The tech industry is focused primarily on business interests that may or may not address issues critical to civil society. Governments, even as they grapple with anticipating and understanding the depth and scope of the changes AI may bring, bring an oversight and regulatory lens to their work to constrain new technologies. Yet government actors too often lack the technical skill or know-how to shape AI’s development effectively. The few actors involved in this consequential process of regulatory development and implementation are concentrated in a small handful of companies and states. The most powerful states are also those with the greatest ability to shape these technologies: at the moment, largely China, the EU member states, and the United States. The development, governance, and management of artificial intelligence are far from equitably allocated across the globe.
Civil society writ large also has too little representation and input in how these technologies are developing. There is a need to invest in civil society’s voice when it comes to the architecture of the technology, the use and control of data, and the economic benefits that flow from artificial intelligence. There are key downstream uses to address as well, such as the way it will be deployed in teaching and learning, democratic decision-making, health care, the justice system, workforce development, and climate change mitigation, among others. None of the nonprofit actors in these fields have the remit, the power, or the resources to help shape this crucial aspect of the future.
One thing that most observers agree on: AI, if governed and developed effectively, presents opportunities for all sectors of society. That includes philanthropy. For this essay, I set aside the very broad range of potential areas in which we could invest as philanthropists and focus on this future-oriented topic: how to help shape the development and direction of artificial intelligence. I do not engage the existential questions associated with the technology; those are worthy of attention and are well-covered elsewhere. Nor do I linger on the questions of concentration of power in the hands of a very few states and companies, as pressing as those issues are. My focus here lies instead on the narrow question of how beneficial use of AI could bring about a brighter future through philanthropy.
Philanthropy can change this trajectory through collective action. The future will be much brighter if philanthropy empowers civil society, as well as members of the communities directly affected, to have a greater voice in the development and governance of new technologies. This goal has been elusive in technological circles in the recent past, as the internet has reached global scale with disproportionate power vested in a small number of corporate actors, largely based in the United States and other Western powers. Instead of repeating the mistakes of the past, civil society, technology developers, and philanthropies can work together to make the spheres of AI and philanthropy more collaborative, effective, and equitable.
The good news is that there is no shortage of opportunities to begin this coordinated effort: namely, by investing in the people, organizations, and movements working toward such a future. A number of nonprofits and academic institutes, such as the Distributed AI Research Institute, the Data & Society Research Institute, TechEquity, and the Network Startup Resource Center, perform research and policy advocacy that drives toward an equity-focused, solutions-oriented AI environment.17 The MacArthur Foundation’s Technology in the Public Interest program supports research, policy development, and practice that aim to uphold public interest considerations in the development and governance of AI. And in 2023, MacArthur joined with nine other philanthropies in committing to a $200 million initiative, led by then-Vice President Kamala Harris, to support AI development while protecting and supporting workers, human rights and freedoms, and the development of norms and rules around this burgeoning technology.18
A number of MacArthur Fellows from the past several years also work in the AI space. Cognitive scientist Josh Tenenbaum, class of 2019, applies his deep understanding of human cognition to the way that AI and machine-learning models are built, with the goal of bringing these technologies closer to the way the human mind operates. Safiya Noble, a 2021 Fellow and an internet studies and digital media scholar, uses her research to demonstrate biases within search engines that reflect oppressive and discriminatory attitudes across race, gender, and culture—an issue that many critics cite as among the technology’s most dangerous. And 2022 Fellow Yejin Choi uses her expertise in natural language processing to develop AI systems based on commonsense reasoning models and implied meaning rather than rigid, logic-based probabilities. These are just a few examples; the good news is that there are plenty more, across a broad spectrum of fields and perspectives, pulling in a similar direction.
But it is not enough for philanthropy simply to fund the people and institutions, excellent though they may be, working on these promising and extremely risky new technologies. We must champion both effectiveness and equity, with an eye toward their use on behalf of the public good—and we must turn that eye inward as well. What else can we do to invest in the development of these technologies? In turn, can we use the technology itself in our approach to philanthropy? The initial investment might itself benefit the practice of philanthropy, if we get it right, thus making us more effective as we seek to shape future technologies.
A major question around the responsible use of AI is that of governance: Who should own, wield, and steward these technologies, and for what purposes? One initiative grappling with these questions is the Philanthropy Data Commons (PDC), a sector-wide governance and technical infrastructure established in 2021 by a group of funders (including MacArthur), civil society actors, and others working in the philanthropic space that explores and enables responsible data sharing and use in philanthropy.
The PDC aims to reduce the power imbalance between philanthropies and those we fund by democratizing data-sharing, managing data as a sector asset, lessening the burden on those seeking funds, and ultimately creating a bridge to more equitable access to funding. Today, it is supporting work to connect proposal and grant data across otherwise disconnected platforms and systems in the philanthropic sector, which will help reduce errors in data and make it easier for funders and grantseekers to find each other.
The PDC envisions its work as eventually shifting the way that funders engage with organizations seeking grants. This open, shared data platform, and the collaborative governance principles undergirding it, could become a new infrastructure in philanthropy that facilitates more effective and more efficient ways of working together and creates greater equity and inclusion in the field. The PDC has the potential to change how we all work in philanthropy by:
Enhancing the sharing and applicability of data and information;
Enabling system interoperability for using and thinking about data;
Reducing administrative burdens and costs for funders and organizations seeking grants; and
Reducing the power imbalance between funders and organizations seeking grants, or even eliminating or inverting it.
Ideally, the PDC could offer philanthropists everywhere a public-interest dataset useful for research, analysis, and problem-solving activities. It could lead to the most effective solutions, those rooted in communities, receiving much more philanthropic support, unmediated by preexisting relationships and access to philanthropic power.19
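A small sketch can illustrate the kind of interoperability the PDC is working toward. The code below is illustrative only, not the PDC’s actual schema or software: the field names, identifiers, and the two imaginary funder platforms are invented to show how records from disconnected grant systems could be mapped into one shared format, so that funders and grantseekers can find each other without duplicate data entry.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SharedProposal:
    """A hypothetical common record format for proposals (not the PDC's real schema)."""
    org_id: str                  # a stable identifier for the grantseeking organization
    org_name: str
    amount_requested_usd: float
    focus_area: str

def from_funder_a(record: Dict[str, str]) -> SharedProposal:
    """Map one imaginary funder platform's export into the shared format."""
    return SharedProposal(
        org_id=record["ein"],                      # e.g., a tax ID used as the stable key
        org_name=record["organization"],
        amount_requested_usd=float(record["request_amount"]),
        focus_area=record["program_area"],
    )

def from_funder_b(record: Dict[str, str]) -> SharedProposal:
    """A second imaginary platform records the same facts under different field names."""
    return SharedProposal(
        org_id=record["tax_id"],
        org_name=record["applicant_name"],
        amount_requested_usd=float(record["amount"]),
        focus_area=record["topic"],
    )

def merge_by_organization(proposals: List[SharedProposal]) -> Dict[str, List[SharedProposal]]:
    """Group proposals by organization so any participating funder can see a
    grantseeker's history without asking it to re-enter the same information."""
    merged: Dict[str, List[SharedProposal]] = {}
    for proposal in proposals:
        merged.setdefault(proposal.org_id, []).append(proposal)
    return merged
```

The substance, of course, lies less in the mapping code than in the governance: who defines the shared fields, who may see the merged records, and on what terms.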
It is instructive to explore the ways in which the fields of philanthropy and technology development have given rise to related critiques. Both concentrate a great deal of wealth in the hands of the few; both are relatively unregulated. But there are crucial differences too. Philanthropy lacks the profit motive that so often drives the technology sector at its core, and it commands far less money than the capital now fueling this new era of AI around the world. Philanthropy has a clear opportunity to build on its recent progress toward collaboration, and in turn can influence and encourage technology developers to do the same.
For instance, funders can devote resources to the collection and analysis of unbiased and robust datasets for both philanthropic and tech organizations. The world would be different if large, open datasets could be accessed at low cost by civil society actors, provided that they incorporated constraints to limit dangerous uses of the same technologies. Recall the climate change example, which posited that an open dataset covering a range of actors, methods, and geographies could be used to identify and enact solutions to climate issues around the world in a fraction of the time it takes today.
Philanthropies can also fund organizations that conduct research and provide equity and ethics training for the technology sector. Tech leaders and developers can be trained to incorporate equity and ethics concerns into their work and develop their products with the goal of long-term societal benefits rather than short-term profit goals.
These examples are broad; more precise ones can illuminate the point further. Consider machine translation projects for languages spoken by small populations, languages in peril of being “forgotten” when the last of their speakers pass on. Many populations around the world communicate only, or principally, in their Indigenous languages. Even if enough people speak a language for it to persist, its speakers can be shut out of governmental processes, the formal economy, and digital resources that are available only in “dominant” languages. This is especially true in many parts of Africa, where the official language of many countries is English even though large portions of their populations cannot speak it, and in India, where the MacArthur Foundation gives funding and operates programs. The Masakhane project, currently underway across the African continent, uses artificial intelligence for language translation and vocabulary development. Similar translation efforts are emerging with support from Nilekani Philanthropies, Mozilla, Google, and other technology firms, though there is potential for conflict of interest, perceived or real, in developing datasets that may be used in a proprietary way rather than shared broadly.20 The effects of bringing potentially millions of people into civic, political, and cultural spaces in which they previously did not have a voice could be tremendous—and philanthropy can play a large role in making that a reality. Consider also the opportunities for teaching Indigenous languages to the next generation: these models may help preserve languages that very few people still speak. This topic is far more complex than one paragraph can suggest, yet the opportunities for using machine learning tools for language access, acquisition, and preservation are plain.21
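For a sense of how accessible these tools have become, the fragment below sketches open-model machine translation from English into Yoruba. It is illustrative only and is not the Masakhane project’s own tooling: it assumes Meta’s openly released NLLB-200 model, used through the Hugging Face transformers library, and an example sentence chosen arbitrarily.

```python
# Illustrative only: open-model machine translation for a lower-resourced
# language pair (English to Yoruba), using Meta's NLLB-200 model via the
# Hugging Face transformers library. This is not the Masakhane project's code.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",  # FLORES-200 code for English
    tgt_lang="yor_Latn",  # FLORES-200 code for Yoruba
)

result = translator("Public health notices must reach every community.",
                    max_length=128)
print(result[0]["translation_text"])
```

That a usable baseline now fits in a dozen lines is precisely why the harder questions (whose data trains these models, who governs the resulting datasets, and who benefits) are the ones philanthropy is positioned to take up.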
Finally, philanthropies can help build a robust tech policy ecosystem by funding and convening collaborations of scholars, organizers, artists, and community leaders. Philanthropists should be sure to take advantage of our immense social capital to underscore the importance of early collaboration and learning among otherwise siloed communities that can share what these new technologies mean for their work, their lives, and their expectations for the future.
Philanthropy can—and should—seek to help shape technologies for the good of humanity, rather than for profit. If we do not intervene in the public interest, we may find ourselves haunted by this missed opportunity for a brighter future. Our previous approaches to investing in and governing new technologies have left too much power in the hands of too few. The harms associated with a laissez-faire approach in an era of artificial intelligence, as compared with previous digital technologies, may be far greater. Promises by the tech industry, from the mid-1990s to today, to self-regulate and to include community members in the growth and design of its products have not come to fruition, but they can serve as a sort of reverse roadmap for how to imagine and design the next phase of technological change. We know what will happen if a laissez-faire approach predominates.
We need to learn from this past quarter-century and design a better, more public-interested approach for the decades to come. This moment of inflection allows us to use futurism to guide today’s investments, to remind ourselves that we can embed greater equity into the technology world, and to recommit to philanthropic practices that help to build a safe, sustainable, and just world.