In the News | April 26, 2023

The next level of AI is approaching. Our democracy isn't ready.

By Danielle Allen | Source: Washington Post

Tech and democracy are not friends right now. We need to change that — fast.

In her latest in a series of columns for The Washington Post, Danielle Allen, Academy member and cochair of the Academy's Commission on the Practice of Democratic Citizenship, assesses the threats that new AI tools may pose to American democracy.

"Social media has already knocked a pillar out from under our democratic institutions by making it exceptionally easy for people with extreme views to connect and coordinate," Allen writes. "We are only just beginning the work of renovating our representative institutions to find mechanisms (ranked choice voting, for instance) that can replace geographic dispersal as a brake on faction. Now, here comes generative artificial intelligence, a tool that will help bad actors further accelerate the spread of misinformation."

Allen underscores that this new technology is not inherently bad, and that a healthy democracy could govern the technology and put it to good use in countless ways. But is our democracy ready to address these governance challenges?

Allen isn't sure. And it is this uncertainty that led her to join a long list of technologists, academics, and even controversial visionaries such as Elon Musk in signing an open letter calling for a pause of at least six months in "the training of AI systems more powerful than GPT-4."

"Regardless of which side of the debate one comes down on, and whether the time has indeed come (as I think it has) to figure out how to regulate an intelligence that functions in ways we cannot predict, it is also the case that the near-term benefits and potential harms of this breakthrough are already clear, and attention must be paid," Allen writes.

As for what we should do in the short term, Allen recommends scaling up public-sector investment in third-party auditing, so we can actually know what models are capable of and what data they're ingesting; accelerating a standards-setting process that builds on work by the National Institute of Standards and Technology; and investigating and pursuing "compute governance," meaning regulation of the massive amounts of energy necessary for the computing power that drives the new models.

But beyond developing a firmer understanding of the impacts of these new AI tools, Allen emphasizes that we need to strengthen the tools of democracy itself: "A pause in further training of generative AI could give our democracy the chance both to govern technology and to experiment with using some of these new tools to improve governance."

View full story: Washington Post

Related Project: Commission on the Practice of Democratic Citizenship
Chairs: Danielle Allen, Stephen B. Heintz, and Eric P. Liu