On the importance of a national AI strategy: interview with Fabien Le Voyer and Karteek Alahari

Date: 07/02/2025 (last updated)
The AI Action Summit takes place in Paris from February 6 to 11, 2025. For Inria, coordinator of the "higher education and research" component of France's national AI strategy, it is an opportunity to emphasize the role of public research in the reflection and projects that international actors must undertake on the benefits and risks of AI systems. Fabien Le Voyer, Director of the AI Program, and Karteek Alahari, Deputy Scientific Director in charge of Artificial Intelligence, discuss the stakes and challenges of this summit, as well as the projects led by Inria and its partners on this occasion.
© Inria / Photo B. Fourrier / Launch of the P16 initiative in 2024

Why is this event important?

Fabien Le Voyer: It is important to remember that Inria, along with the French State, organized a first global summit in 2019. Much has happened since then, and this summit will allow an ecosystem of actors (representatives of states, companies, scientists, and civil society) to come together and discuss the impact of artificial intelligence, its opportunities, the need to regulate the technology, and the needs of the international community, for example in terms of access to computing or to certain language models and data, especially in countries of the Global South.

To give a concrete example from one of the summit's five themes (Future of Work, Innovation, Governance, AI for the Common Good, Trust): on trust and the evaluation of AI, the event allows the state actors responsible for identifying the risks inherent in advanced AI systems, the AI Safety Institutes, to collaborate through joint experiments across countries: sharing common evaluation methods, applying benchmarks to a set of models, and interpreting the results together, with a multicultural dimension, for example on biases. These experiments continue the dynamic of the AI Safety Summits launched at Bletchley Park in the UK at the end of 2023, of which the AI Action Summit is a continuation. This international discussion on evaluation and risks is necessary because it allows progress towards a mutual understanding of the challenges and, in the long term, common frameworks.

We have been working on these topics for over a year with other public actors with whom Inria has had bilateral agreements for several years: the LNE (the French national testing laboratory), the PEReN (the centre of expertise on digital regulation at Bercy, with which the RegalIA project led by Benoît Rottembourg works), and the ANSSI. On January 31, under the direction of the DGE and the SGDSN (the General Secretariat for Defence and National Security), the National Institute for the Evaluation and Security of AI (INESIA) was officially launched.

This institute has no legal personality (no new entity is created), but it will allow us to work together under a common strategy to address all of these challenges. It will be France's contribution to the network of AI Safety Institutes, with one major difference: INESIA operates within a regulatory framework, the AI Act (RIA in French), and does not address only the safety challenges of AI. It aims to steer its research and innovation activities across a broader field: the full set of AI systems (not just generative or advanced systems) that the RIA intends to regulate. INESIA will thus provide metrics, methodologies, and evaluation protocols covering the entire field of regulatory obligations (Articles 8 to 15, on reliability, robustness, cybersecurity, transparency, etc.).

The institute will have several missions: consolidating a research program led by Inria, within the framework of the Program Agency, and involving the entire national scientific community in the challenges of AI evaluation; supporting the regulator in the implementation of the AI Act; and evaluating the performance and reliability of systems and comparing them with one another, creating new benchmarks when necessary, for example in the French language.
These topics concern many colleagues, whether in project teams or in functions related to technology transfer and development: many of them, along with colleagues from other academic actors (universities or research organizations), have already been involved in building the Agency's program, and I thank them for it. It is important that public research be part of such an initiative: without access to the forefront of science and technology, it is not possible to make progress on these topics.

How will Inria contribute to the Summit?

Fabien Le Voyer: Firstly, we can be proud that the Scientific Committee of the summit's scientific event, organized by IPParis (through the Hi! PARIS AI Cluster, of which Inria is a member), is chaired by Michael Jordan, who has joined Inria (SIERRA project-team at the Inria Paris Centre) as part of an industrial chair of the Inria Foundation.

At the institutional level, Inria is present in several capacities. As coordinator of the research component of the national AI strategy (PNRIA), and also as coordinator of higher education and training actors, Inria plays a leading role in organizing the summit. In particular, it will highlight the higher education and research (ESR) actors in AI, including the 9 AI Clusters, at the scientific event on February 6 and 7 at the École Polytechnique. We have also initiated public events organized by the AI Clusters, with strong participation from the Inria Centres, over the weekend of February 8 and 9. This is also an opportunity for Inria to emphasize the national objectives of the national AI strategy that it leads, which are at the heart of the Summit's priority themes:

  • In the Future of Work axis, LaborIA, the joint structure between Inria and the Ministry of Labour and Employment, studies the impact of AI on the future of work, in particular by supporting public structures in deploying AI systems or by analyzing their impact on specific professions. This is naturally related to Inria's opening to the humanities and social sciences, a priority of our COMP (objectives, means, and performance contract). Beyond the national level, LaborIA is taking part in the Summit's international dynamic: an international network of laboratories on AI and work will be launched to drive field research in different countries and characterize this impact.

  • Regarding Governance, Inria leads the Paris Centre of Expertise, one of a network of three centres conducting work for the Global Partnership on AI (GPAI), in close cooperation with CEIMIA in Montreal and NICT in Tokyo, a major partner of Inria in Japan. The Summit will be an opportunity to bring together a wide range of actors (state representatives, international experts, and international institutions) around new joint projects, for example on intellectual property and generative AI, or on what is called "agentic AI", with more "autonomous" agents.

  • In the Trust axis, Inria, through the National Institute for the Evaluation and Security of AI (INESIA) mentioned above, has contributed to the drafting of a report on the cybersecurity risks of AI and will publish a leaderboard, together with the national AI coordination, Hugging Face, GENCI, and the LNE, integrating new French-language benchmarks. In AI jargon, a leaderboard presents performance comparisons of models. One of the benchmarks will notably include questions drawn from baccalaureate evaluation datasets, with the aim of comparing the performance of LLMs in French.
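To make the leaderboard idea concrete, here is a minimal sketch of how per-benchmark scores can be aggregated into a ranking. The model names, benchmark names, and scores are purely illustrative assumptions, not results from the actual INESIA/Hugging Face leaderboard:

```python
# Minimal leaderboard sketch: average each model's per-benchmark scores
# and rank models by that average. All names and numbers are hypothetical.

def build_leaderboard(scores: dict[str, dict[str, float]]) -> list[tuple[str, float]]:
    """Average each model's benchmark scores and sort in descending order."""
    averaged = {
        model: sum(bench.values()) / len(bench)
        for model, bench in scores.items()
    }
    return sorted(averaged.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical French-language benchmark results on a 0-100 scale.
scores = {
    "model-a": {"bac-francais": 72.0, "qa-fr": 64.0},
    "model-b": {"bac-francais": 68.0, "qa-fr": 70.0},
}

for rank, (model, avg) in enumerate(build_leaderboard(scores), start=1):
    print(f"{rank}. {model}: {avg:.1f}")
```

Real leaderboards are more sophisticated (normalized metrics, confidence intervals, per-task breakdowns), but the principle is the same: shared benchmarks make models directly comparable.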

Beyond the objectives of the national AI strategy, many scientists have contributed to initiatives and deliverables for the Summit (I will inevitably forget some of them):

  • In the Innovation axis, Inria is participating in a joint Challenge with our spin-off Probabl to certify the skills of data scientists in public administrations on the scikit-learn software library.

  • In the AI for the Common Good axis, we can mention, among others, the promotion of the open archives of free software in the context of generative AI, around Software Heritage.

Finally, Inria will also be present on environmental themes, notably through Jacques Sainte-Marie, who is contributing to the report on hardware challenges for energy efficiency and the circular economy, and participating in the public debate on AI organized by the "Court of Future Generations".

What is the Global Partnership on AI (GPAI) and what role does Inria play in it?

Fabien Le Voyer: The GPAI is an international organization, now linked to the OECD, whose objective is to provide concrete, actionable projects for the international community in three areas: science, solutions, and standards. It was launched in 2020 with the ambition of building a network of experts similar to the one that has worked for decades on climate, the IPCC. We work with international experts to build federations of communities and to contribute to the development of technical solutions for the international community. Inria leads the Paris Centre of Expertise, alongside the two other centres in Tokyo and Montreal. The GPAI's centres of expertise steer the deployment of some of these projects: they organize, promote, and federate.

Through this institutional role, and particularly leveraging Inria's historical scientific ties with various international partners, Inria enables public research to be active and not just a spectator in this dynamic, also supporting France's digital diplomacy. The approach is multilateral but also bilateral, as illustrated by the recent creation (in November 2024) of the Franco-Chilean Binational Center in AI, led by Inria Chile.

Beyond the national objectives already mentioned, what are the main results of the research component of the national AI strategy led by Inria?

Fabien Le Voyer: The State entrusted Inria with this coordination in 2018, which also involves leading national objectives, as we just mentioned. Now that the 3IA institutes, and today the 9 AI Clusters, are well established in the national landscape, it is important to reinforce Inria's coordination to ensure coherence between the national strategy and the strategy of each university site. An AI Cluster is led by a university with a consortium of actors. This coordination role is a significant challenge for the coming years; it also benefits from the trusting relationships we have built with universities since 2019 through the Inria Centres at universities. Among the national objectives gathered in the AI Program of the Program Agency, I can mention:

  • Within the framework of the State's industrial policy, the P16 initiative supports the deployment of scikit-learn, one of the most widely used software libraries in the world. This also involves Probabl, a spin-off startup from Inria whose public-interest mission is to modernize scikit-learn and align it with business practices. P16 is also identifying a set of software libraries covering the data lifecycle upstream of AI processing (interoperability, preparation, cleaning, etc.), which will be maintained and consolidated into a sovereign digital commons for AI, complete and aligned with the needs of businesses.

  • The PEPR AI, which Inria co-leads with the CEA and CNRS under the "France 2030" investment plan, is a research program that supports projects and chairs in areas such as trustworthy AI, frugal AI, and the mathematical foundations of AI.

  • The RegalIA project, focused on algorithm transparency and launched in 2020, provides algorithms, data, and software tools to characterize the behavior of algorithms used by major digital platforms (Amazon, Booking.com, Airbnb, etc.), particularly in pricing and in the ranking of offers. It is called upon by several public authorities, such as the Court of Auditors, to provide expertise in qualifying the practices of these algorithms, and it also works with the PEReN, the centre of expertise on digital regulation at Bercy.

  • To facilitate access to European computing capabilities, especially for companies, Inria recently put together with GENCI the French response to the European call for AI Factories, which aims to create a one-stop shop for all actors (companies, research actors, administrations) needing access to computing-hour quotas. The added value is access to all the European computing capabilities supported by EuroHPC, as well as a set of services: in AI, in the cloud, in access to databases, and in data preparation.

From a scientific point of view, what are the major challenges in AI today?

Karteek Alahari: As we have seen, the evaluation of AI is a central issue, explored by various research teams around the world, but without an organized movement. New models come out every month, and if we are not able to evaluate them—in terms of risks, limitations, and security issues—we will not be able to explain to companies or the general public how to use these models and why to use them (or not). Research must be organized because, beyond purely technical questions, AI raises many ethical and regulatory questions. There is also a demand from contemporary societies to understand the challenges of AI, and therefore, a responsibility for us to study the systems, limitations, and social impacts of AI.

Inria, as a public research institute, is a neutral actor that must develop a form of digital trust: we have the vocation to work with all possible actors for the security and knowledge of these models.

Another area where Inria has a strong position is frugality. In recent days, with the launch of DeepSeek, we have seen evidence that a powerful AI can be frugal: frugal in data, frugal in computation, and so on. More and more actors, including Inria, are addressing these issues. The academic and industrial worlds need to be able to advance with few resources and create efficient models, even if those models are modest in terms of computation and data.

What are Inria's modes of action in these two aspects of AI?

Karteek Alahari: One of the main axes of the PEPR AI is precisely the frugality of AI models. Many of our teams work on this topic. For example, the Sharp project (Ockham project-team at the Inria Lyon Centre with the ENS de Lyon, Claude Bernard University, and the CNRS) focuses on designing optimization and learning algorithms that are intrinsically frugal in resources, while mathematically controlling their performance and robustness to modeling errors.

We can also mention the Datamove project-team, in collaboration with the Université Grenoble Alpes and the MIAI AI Cluster, which focuses its research on optimizing data movements for intensive computing. These movements are a significant source of energy consumption for intensive computing and, therefore, a relevant target for improving the energy efficiency of machines.

Other teams work more specifically on embedded AI. At the Inria Centre at Rennes University, the Taran project-team starts from a premise: the key to sustainably improving performance (speed and energy) lies in domain-specific hardware. In this new era, the processor will be augmented with a set of hardware accelerators designed to execute specific tasks far more efficiently than a general-purpose processor. Taran focuses on designing accelerators that are energy-efficient and fault-tolerant.

The Corse project-team, on the other hand, addresses the challenge of performance and energy consumption faced by the electronics industry today, where compilers and execution environments must evolve and interact.

Another central theme is the impact on our societies of AI models that are available to everyone: citizens, students, etc. This is what the Flowers project-team at the Inria Centre at the University of Bordeaux studies, for example. They work on the use of generative AI tools like ChatGPT in schools: how are they used? What are their effects on learning methods and outcomes? We need to educate people about these tools, but also about their limitations and dangers. DeepSeek seems fast and frugal, but it also censors information about the Tiananmen Square protests. We need research to find better ways to inform the general public and to teach critical analysis of sources and of AI biases. And the creation of such tools will not be possible without the cooperation of companies.

Is there an international scientific network in AI? How does it develop?

Karteek Alahari: There are already initiatives. The first was CLAIRE (now CAIRNE), a pan-European confederation of about a hundred research laboratories and scientists in artificial intelligence in Europe, where Inria is associated with our strategic German partner, the DFKI. Another network of excellence, ELLIS (European Laboratory for Learning and Intelligent Systems), focuses on fundamental science, innovation, and the social impacts of AI.

These two networks have achieved results: in addition to the public events they have organized, scientific conferences have enabled the creation of several knowledge-sharing communities and an effective network of doctoral students. This is perhaps the most important aspect: we need to reach the young people who will shape the AI of tomorrow.

Another important initiative at the European level is Adra-e, aimed at creating an ecosystem among AI, data science, and robotics actors, for which Inria is in charge of coordination.

In each domain of AI, such as robotics or natural language processing, connections are forged through major scientific conferences, which are crucial. It is impossible to evaluate, model, regulate, or educate without considering the digital ecosystem, both at the national and international levels.

*This interview was originally published on inria.fr.