
The AI Action Summit & Civil Society’s (Possible) Impact

Events, Sustaining the Commons
The Conciergerie, Paris by Mustang Joe is marked with CC0 1.0

On February 10 and 11, 2025, the government of France convened the AI Action Summit, bringing together heads of state, tech leaders, and civil society to discuss global collaboration and action on AI. The event was co-chaired by French President Macron and Indian Prime Minister Modi. This was the third such Summit in just over a year, the first two in the UK and South Korea respectively. The next one is to be hosted in India, with a firm date not yet set.

Creative Commons was invited to be an official participant in the Summit and given room to speak on a panel about international AI governance. Given our continued advocacy for public interest AI, and our on-the-ground work, particularly in the US and EU, to interrogate new governance structures for data sharing, open infrastructures, and data commons, the Summit was an important venue to contribute to the global conversation.

We focused on three things in our panel and direct conversations:

  1. Civil society matters, and must continue to be included. While we may not hold the pen on drafting declarations, or be in the negotiating room with world leaders and their ample security teams, we must continue to (loudly) bring our perspectives to these spaces. If we aren’t there, then nobody is. Without civil society, there can be no public interest. 
  2. The importance of openness in AI. What it means, who benefits from it, and how we think critically about ongoing (dis)incentives to participate in the open knowledge ecosystem.
  3. Local solutions for local contexts, local content, and local needs.

Civil Society Matters

Civil society matters because we represent real concerns from real people. A people-centered approach to AI must inevitably be a planet-centered approach as well; one simply cannot and should not exist without the other.

The civil society contingent at the Summit also included major philanthropic foundations that have long focused on public interest technology. Encouragingly (we hope), they have joined forces with private investment and governments to launch Current AI, a coalition advocating ‘global collaboration and local action, building a future where open, trustworthy technology serves the public interest’. The Summit also saw the launch of ROOST (Robust Open Online Safety Tools), which was born out of a conversation at a prior Summit about the absence of reliable, robust, high-quality open source tooling for trust and safety. ROOST adds a critical building block to the open source AI ecosystem: tools that allow anyone to run safety checks on datasets before use, which should (hopefully) result in safer model performance.

But philanthropy is not a business model for something that is set to become ubiquitous public infrastructure, at an even greater level than is already the case with the internet. The investments of philanthropy alone will not be enough to steer the public interest conversation to the top of the action agenda. There must be matching political will and public investment, and we’ll be watching closely for evidence that actions are following words.

Our view is that governments should prioritize investment in publicly accessible AI that meets open standards and allows for equitable access. These are key drivers of innovation, and every sector stands to benefit. Governments can lead the way by investing in compute, (re)training people, and preparing and encouraging high-quality, openly licensed datasets, to level the playing field for researchers, innovators, open source developers, and beyond.

Openness in AI

Openness in AI continues to be a broad and multifaceted topic: how do we continue to foster open sharing, making it resilient, safe, and trustworthy, at a time when we’re hearing from our community that some creators and organizations are choosing more restrictive licenses, or hesitating to share at all, in an attempt to regain agency over how their content is used as training data? Our future depends on protecting the progress of the last 20 years of open practices. The answer does not lie in a misguided shift from CC BY to CC BY-NC-ND. We have to think more holistically.

The CC licenses are not a governance framework in and of themselves, but what they represent is an absolutely critical component of the legal and social norms that support data governance in service of the public interest.

In the context of data governance, we see our role as helping to negotiate preferences for the reuse of datasets containing openly licensed works. We need to ensure that folks are still incentivized to participate and contribute to the commons, while feeling that their voices are heard and their work is contributing in mutually beneficial ways. If you are the steward of a large open dataset, we want to hear from you.

Local Solutions for Local Contexts

From CC’s perspective, local solutions for local contexts are where we need to put our energy. As Janet Haven from Data & Society frames it, let’s focus on collaboration for AI governance, rather than striving for a single, global governance structure. One size does not fit all, and even issues that are global needs, like planetary survival, will require very different efforts by country or region. It was rather encouraging to hear examples of “small” language models from across the world that emphasize language preservation and cultural context. Efforts to record, catalog, and digitize language and cultural artifacts are underway. This is yet another area where we see a need to systematically articulate and clearly signal preferences for reuse, so that local efforts thrive and are respected appropriately.

Where We Go From Here

We heard from many fellow civil society organizations that the tone in France differed markedly from the previous Summits in the UK and South Korea. There was a welcome diversity of civil society voices on panels and in workshops, with a steady drumbeat of calls for safe, sustainable, and trustworthy AI. “Open source” and “public interest” were phrases uttered in many major interventions. But setting aside that we could collectively fill a few volumes on how we define these terms (sustainable for whom?), the real impact of the Summit will be seen in how we collaborate from now on.

The political discussions at the Summit focused heavily on the false dichotomy of regulation versus innovation, and the language used heavily fed into the narrative that the two are mutually exclusive. The emphasis on regional investment (and superiority), paired with offers of global collaboration, was mildly disheartening but also fully expected. Political statements around public interest were repeated but vague. Canadian Prime Minister Trudeau emphatically urged everyone not to forget the people, stating that “the benefits must accrue to everyone”. Whether those in power will pay attention to that message is anyone’s guess. Take, for example, The Paris Charter on Artificial Intelligence in the Public Interest, which says all the right things but lacks both widespread endorsement and meaningful steps toward implementation.

We are clear-eyed about the fact that AI is here, has been for quite some time, and will not go away. We need collaborative, pragmatic approaches to steer toward what we see as beneficial outcomes and public interest values. While there were glimmers of hope from some who hold legislative and executive power, it’s clear that civil society has a lot of advocacy work ahead.

The Summit culminated in countries signing onto a declaration, with notable omissions from the United States and the UK. As always, it is only once the media cycle moves on that we will see whether there is any lasting impact. In the meantime, let’s not wait for another global Summit to take action.

Posted 18 February 2025
