From December 9 to 13, 2025, the Latin American Summer School on Cognitive Robotics (LACORO) took place at the Universidad de O’Higgins. This academic event was designed to provide access to cutting-edge knowledge in artificial intelligence (AI) applied to robotics and to train the next generation of Latin American talent. Inria Chile actively participated as a sponsor, reaffirming its commitment to talent development and scientific cooperation in the region.
A Bridge for Robotics in the Southern Hemisphere
LACORO, a summer school created in 2020 and organized by the IEEE Robotics & Automation Society and the Universidad de O’Higgins, aims to make knowledge about AI applications in robotics more accessible to students and researchers in the Southern Hemisphere. It also fosters intercultural collaboration within and beyond the Americas.
The event featured lectures and tutorials on AI, robotics, and human cognition, as well as scientific mentoring sessions. Additionally, it strengthened research networks within the region, highlighting and promoting local advancements.
Inria Researchers Present Their Latest Advances in AI
During the school, Inria Chile participated with presentations by prominent scientists who shared their expertise with engineering students, master’s and doctoral candidates, and professionals in the field.
Among the speakers was Luis Martí, Scientific Director of Inria Chile, who delivered the talk “The Role of Artificial Intelligence in Climate Change Mitigation” on December 10. He explored the dual role of AI in addressing climate change: as a tool for understanding and mitigating environmental phenomena, and as a challenge in managing its own ecological footprint. He also highlighted initiatives such as OcéanIA, which applies advanced tools, including physics-informed neural networks and learning from limited data, to unravel the complexity of the ocean and its critical capacity to capture CO2, aiming to break the “vicious circle” threatening its regenerative function in the global ecosystem.
Xavier Hinaut, a researcher within the Mnemosyne project-team at the Inria Center of the Université de Bordeaux in France, is known for his work at the intersection of computational neuroscience and bio-inspired machine learning. His research focuses on modeling recurrent neural networks, especially for language processing and robotics applications.
On December 10, he presented the talk “Reservoir Computing and ReservoirPy Tutorial” in Rancagua, explaining Reservoir Computing (RC), an efficient machine learning paradigm for sequential data, known for its low computational cost and its ability to capture temporal dynamics. He also introduced ReservoirPy (Rpy, available on GitHub: github.com/reservoirpy/reservoirpy), a popular open-source Python library developed by the Mnemosyne team at Inria. Built on tools such as NumPy, SciPy, Matplotlib, and JAX, ReservoirPy supports both classic and complex architectures, online and offline learning rules, distributed execution, hierarchical networks, and hyperparameter optimization tools.
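To make the RC idea concrete, the following is a minimal, library-free NumPy sketch of an echo state network, the classic RC architecture: a fixed random recurrent reservoir is driven by the input, and only a linear readout is trained (here by ridge regression) to predict the next step of a sine wave. All sizes, seeds, and hyperparameters are illustrative assumptions, not ReservoirPy defaults; the library itself packages this pattern (and much richer ones) as composable nodes.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy task: one-step-ahead prediction of a sine wave.
T = 300
u = np.sin(np.linspace(0, 6 * np.pi, T)).reshape(-1, 1)

n_res = 100                                     # reservoir size (assumed)
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))       # fixed random input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))      # fixed random recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))       # rescale spectral radius to 0.9

# Drive the reservoir: the recurrent weights are never trained,
# which is what keeps RC's computational cost low.
states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W_in @ u[t] + W @ x)
    states[t] = x

# Train only the linear readout with ridge regression,
# discarding an initial warm-up transient.
warmup = 20
X, Y = states[warmup:-1], u[warmup + 1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ Y)

pred = states[:-1] @ W_out                      # one-step-ahead predictions
err = np.mean((pred[warmup:] - u[warmup + 1:]) ** 2)
```

Because the reservoir's state at time t depends on the whole input history, the cheap linear readout can exploit temporal context it never had to learn, which is the core appeal of the paradigm for sequential data.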
Additionally, on December 12, he delivered the talk “Reservoir SMILES: Towards SensoriMotor Interaction of Language and Embodiment of Symbols with Reservoir Architectures,” where he presented his work on a new generation of neuron-based computational models for language processing and production. These models rely on biologically plausible learning mechanisms grounded in recurrent neural networks.
Xavier Hinaut and Cognitive Language Models
As part of his visit to Chile, French researcher Xavier Hinaut led a new edition of Inria Chile Talks on December 9, titled “BrainGPT: Tailoring Transformers into Cognitive Language Models.”
In his presentation, Hinaut discussed how large language models (LLMs), despite their ability to predict brain activity, differ significantly from the biological functioning of the human brain. He introduced the BrainGPT project, which explores hybrid architectures inspired by Reservoir Computing and Transformers to build more efficient models that require less training data and have greater biological plausibility.
“The idea behind the BrainGPT project is, on the one hand, to try to understand how the brain works. And, on the other hand, to enable large language models (LLMs) to not have to read every book every time to answer a question, but rather to work like a brain, processing each word on the fly,” Hinaut explains.
