Roadmap for animate matter published in Journal of Physics: Condensed Matter

The three properties of animacy. The three polar plots sketch our jointly perceived level of development for each principle of animacy (i.e. activity, adaptiveness and autonomy) for each system discussed in this roadmap. The angular coordinate identifies the various systems, while the radial coordinate represents the level of development (from low to high) that each system shows for the principle of that plot. Ideally, within a generation, all systems will fill these polar plots, showing high levels in each of the three attributes of animacy. For now, only biological materials (not represented here) can be considered fully animate. (Image from the manuscript, adapted.)
Roadmap for animate matter
Giorgio Volpe, Nuno A M Araújo, Maria Guix, Mark Miodownik, Nicolas Martin, Laura Alvarez, Juliane Simmchen, Roberto Di Leonardo, Nicola Pellicciotta, Quentin Martinet, Jérémie Palacci, Wai Kit Ng, Dhruv Saxena, Riccardo Sapienza, Sara Nadine, João F Mano, Reza Mahdavi, Caroline Beck Adiels, Joe Forth, Christian Santangelo, Stefano Palagi, Ji Min Seok, Victoria A Webster-Wood, Shuhong Wang, Lining Yao, Amirreza Aghakhani, Thomas Barois, Hamid Kellay, Corentin Coulais, Martin van Hecke, Christopher J Pierce, Tianyu Wang, Baxi Chong, Daniel I Goldman, Andreagiovanni Reina, Vito Trianni, Giovanni Volpe, Richard Beckett, Sean P Nair, Rachel Armstrong
Journal of Physics: Condensed Matter 37, 333501 (2025)
arXiv: 2407.10623
doi: 10.1088/1361-648X/adebd3

Humanity has long sought inspiration from nature to innovate materials and devices. As science advances, nature-inspired materials are becoming part of our lives. Animate materials, characterized by their activity, adaptability, and autonomy, emulate properties of living systems. While only biological materials fully embody these principles, artificial versions are advancing rapidly, promising transformative impacts in the circular economy, health and climate resilience within a generation. This roadmap presents authoritative perspectives on animate materials across different disciplines and scales, highlighting their interdisciplinary nature and potential applications in diverse fields including nanotechnology, robotics and the built environment. It underscores the need for concerted efforts to address shared challenges such as complexity management, scalability, evolvability, interdisciplinary collaboration, and ethical and environmental considerations. The framework defined by classifying materials based on their level of animacy can guide this emerging field to encourage cooperation and responsible development. By unravelling the mysteries of living matter and leveraging its principles, we can design materials and systems that will transform our world in a more sustainable manner.

Soft Matter Lab members present at SPIE Optics+Photonics conference in San Diego, 3-7 August 2025

The Soft Matter Lab participates in the SPIE Optics+Photonics conference in San Diego, CA, USA, 3-7 August 2025, with the presentations listed below.

Giovanni Volpe, who serves as Symposium Chair for the SPIE Optics+Photonics Congress in 2025, is a coauthor of the following invited presentations:

Giovanni Volpe will also be the presenting author of the following poster contributions:

Poster by A. Callegari at SPIE-OTOM, San Diego, 4 August 2025

One of the Hexbugs used in the experiment. (Image by the Authors of the manuscript.)
Experimenting with macroscopic active matter
Angelo Barona Balda, Aykut Argun, Agnese Callegari, Giovanni Volpe
SPIE-OTOM, San Diego, CA, USA, 3 – 7 August 2025
Date: 4 August 2025
Time: 5:30 PM – 7:30 PM PDT
Place: Conv. Ctr. Exhibit Hall A

Presenter: Giovanni Volpe
Contribution submitted by Agnese Callegari

Active matter builds on concepts of nonequilibrium thermodynamics applied across the most diverse disciplines. A key concept is the active Brownian particle, which, unlike its passive counterpart, extracts energy from its environment to generate complex motion and emergent behaviors. Despite its significance, active matter remains absent from standard curricula. This work presents macroscopic experiments using commercially available Hexbugs to demonstrate active-matter phenomena. We show how Hexbugs can be modified to perform both regular and chiral active Brownian motion and to interact with passive objects, inducing their movement and rotation. By introducing obstacles, we sort Hexbugs based on motility and chirality. Finally, we demonstrate a Casimir-like attraction between planar objects in the presence of active particles.
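For readers who want to try the model numerically, the (chiral) active Brownian particle dynamics mentioned above can be integrated in a few lines. The Python sketch below implements the standard textbook equations, with an angular drift omega producing the chiral motion that the modified Hexbugs reproduce; it is an illustration, not code from the poster, and all parameter values are placeholders.

```python
import numpy as np

def simulate_abp(n_steps=10_000, dt=0.01, v=1.0, omega=0.0,
                 D_t=0.01, D_r=0.1, rng=None):
    """Integrate a 2D (chiral) active Brownian particle.

    v     : self-propulsion speed
    omega : angular drift (0 -> regular ABP, nonzero -> chiral ABP)
    D_t   : translational diffusion coefficient
    D_r   : rotational diffusion coefficient
    """
    if rng is None:
        rng = np.random.default_rng()
    x = np.zeros((n_steps, 2))  # particle positions
    theta = 0.0                 # particle orientation
    for i in range(1, n_steps):
        # Orientation: deterministic drift plus rotational noise.
        theta += omega * dt + np.sqrt(2 * D_r * dt) * rng.standard_normal()
        # Position: self-propulsion along the orientation plus translational noise.
        step = v * dt * np.array([np.cos(theta), np.sin(theta)])
        step += np.sqrt(2 * D_t * dt) * rng.standard_normal(2)
        x[i] = x[i - 1] + step
    return x

trajectory = simulate_abp(omega=2.0)  # a chiral active Brownian particle
```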

Reference
Angelo Barona Balda, Aykut Argun, Agnese Callegari, Giovanni Volpe
Playing with Active Matter, American Journal of Physics 92, 847–858 (2024)

Poster by A. Callegari at SPIE-ETAI, San Diego, 4 August 2025

Focused rays scattered by an ellipsoidal particle (left). Optical torque along y calculated in the x-y plane using ray scattering with a grid of 1600 rays (top right) and using a trained neural network (bottom right). (Image by the Authors of the manuscript.)
Dense neural networks for geometrical optics
David Bronte Ciriza, Alessandro Magazzù, Agnese Callegari, Gunther Barbosa, Antonio A. R. Neves, Maria Antonia Iatì, Giovanni Volpe, and Onofrio M. Maragò
SPIE-ETAI, San Diego, CA, USA, 3 – 7 August 2025
Date: 4 August 2025
Time: 5:30 PM – 7:30 PM PDT
Place: Conv. Ctr. Exhibit Hall A

Presenter: Giovanni Volpe
Contribution submitted by Agnese Callegari

Light can trap and manipulate microscopic objects through optical forces and torques, as seen in optical tweezers. Predicting these forces is crucial for experiments and setup design. This study focuses on the geometrical optics regime, which applies to particles much larger than the light’s wavelength. In this model, a beam is represented by discrete rays that undergo multiple reflections and refractions, transferring momentum and angular momentum. However, the choice of ray discretization affects the balance between computational speed and accuracy. We demonstrate that neural networks overcome this limitation, enabling faster and even more precise simulations. Using an optically trapped spherical particle with an analytical solution as a benchmark, we validate our method and apply it to study complex systems that would otherwise be computationally demanding.
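The trained networks themselves are described in the reference below. As a schematic illustration of the underlying idea, replacing an expensive force computation with a cheap learned surrogate, the following Python sketch fits a small dense network to a toy restoring-force function that stands in for the ray-optics solver; the toy force, architecture, and parameters are assumptions for illustration, not the paper's.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stand-in for the expensive ray-optics solver: a trap-like restoring
# force that decays away from the focus (illustrative only).
def toy_optical_force(pos, k=1.0):
    return -k * pos * np.exp(-0.5 * np.sum(pos**2, axis=1, keepdims=True))

# Training data: sampled particle positions and the 'exact' forces there.
rng = np.random.default_rng(0)
positions = rng.uniform(-2, 2, size=(5000, 3))  # x, y, z in trap units
forces = toy_optical_force(positions)

# A small dense network learns the position -> force mapping.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(positions, forces)

# Once trained, evaluating the network is much cheaper than re-running
# the full force calculation at every new position.
test = rng.uniform(-2, 2, size=(10, 3))
print(np.abs(model.predict(test) - toy_optical_force(test)).max())
```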

Reference
David Bronte Ciriza, Alessandro Magazzù, Agnese Callegari, Gunther Barbosa, Antonio A. R. Neves, Maria A. Iatì, Giovanni Volpe, Onofrio M. Maragò
Faster and more accurate geometrical-optics optical force calculation using neural networks, ACS Photonics 10, 234–241 (2023)

Quantitative evaluation of methods to analyze motion changes in single-particle experiments published on Nature Communications

Rationale for the challenge organization. The interactions of biomolecules in complex environments, such as the cell membrane, regulate physiological processes in living systems. These interactions produce changes in molecular motion that can be used as a proxy to measure interaction parameters. Time-lapse single-molecule imaging allows us to visualize these processes with high spatiotemporal resolution and, in combination with single-particle tracking methods, provides trajectories of individual molecules. (Image by the Authors of the manuscript.)
Quantitative evaluation of methods to analyze motion changes in single-particle experiments
Gorka Muñoz-Gil, Harshith Bachimanchi, Jesús Pineda, Benjamin Midtvedt, Gabriel Fernández-Fernández, Borja Requena, Yusef Ahsini, Solomon Asghar, Jaeyong Bae, Francisco J. Barrantes, Steen W. B. Bender, Clément Cabriel, J. Alberto Conejero, Marc Escoto, Xiaochen Feng, Rasched Haidari, Nikos S. Hatzakis, Zihan Huang, Ignacio Izeddin, Hawoong Jeong, Yuan Jiang, Jacob Kæstel-Hansen, Judith Miné-Hattab, Ran Ni, Junwoo Park, Xiang Qu, Lucas A. Saavedra, Hao Sha, Nataliya Sokolovska, Yongbing Zhang, Giorgio Volpe, Maciej Lewenstein, Ralf Metzler, Diego Krapf, Giovanni Volpe, Carlo Manzo
Nature Communications 16, 6749 (2025)
arXiv: 2311.18100
doi: 10.1038/s41467-025-61949-x

The analysis of live-cell single-molecule imaging experiments can reveal valuable information about the heterogeneity of transport processes and interactions between cell components. These characteristics are seen as motion changes in the particle trajectories. Despite the existence of multiple approaches to carry out this type of analysis, no objective assessment of these methods has been performed so far. Here, we report the results of a competition to characterize and rank the performance of these methods when analyzing the dynamic behavior of single molecules. To run this competition, we implemented a software library that simulates realistic data corresponding to widespread diffusion and interaction models, both in the form of trajectories and videos obtained in typical experimental conditions. The competition constitutes the first assessment of these methods, providing insights into the current limitations of the field, fostering the development of new approaches, and guiding researchers to identify optimal tools for analyzing their experiments.
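As a minimal illustration of the kind of task the competition addresses (detecting a motion change along a single trajectory), the Python sketch below simulates a two-dimensional Brownian trajectory whose diffusion coefficient switches midway and locates the switch with a simple CUSUM-type statistic. This is a didactic toy under simplified assumptions, not the challenge's simulation library or any of the competing methods.

```python
import numpy as np

def trajectory_with_switch(n=1000, t_switch=500, D1=0.1, D2=1.0, dt=1.0, seed=0):
    """2D Brownian trajectory whose diffusion coefficient jumps at t_switch."""
    rng = np.random.default_rng(seed)
    D = np.where(np.arange(n) < t_switch, D1, D2)
    steps = rng.standard_normal((n, 2)) * np.sqrt(2 * D * dt)[:, None]
    return np.cumsum(steps, axis=0)

def detect_switch(traj):
    """Crude changepoint estimate: CUSUM statistic on the squared step sizes,
    maximized where the mean squared displacement per step shifts."""
    sq = np.sum(np.diff(traj, axis=0) ** 2, axis=1)  # squared step sizes
    k = np.arange(1, len(sq) + 1)
    cusum = np.abs(np.cumsum(sq) - k * sq.mean())
    return int(np.argmax(cusum)) + 1

traj = trajectory_with_switch()
print(detect_switch(traj))  # should fall close to the true switch at step 500
```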

Deep-Learning Investigation of Vibrational Raman Spectra for Plant-Stress Analysis on ArXiv

In this work, we present an unsupervised deep learning framework using Variational Autoencoders (VAEs) to decode stress-specific biomolecular fingerprints directly from Raman spectral data across multiple plant species and genotypes. (Image by the Authors of the manuscript. A part of the image was designed using Biorender.com.)
From Spectra to Stress: Unsupervised Deep Learning for Plant Health Monitoring
Anoop C. Patil, Benny Jian Rong Sng, Yu-Wei Chang, Joana B. Pereira, Chua Nam-Hai, Rajani Sarojam, Gajendra Pratap Singh, In-Cheol Jang, and Giovanni Volpe
arXiv: 2507.15772

Detecting stress in plants is crucial for both open-farm and controlled-environment agriculture. Biomolecules within plants serve as key stress indicators, offering vital markers for continuous health monitoring and early disease detection. Raman spectroscopy provides a powerful, non-invasive means to quantify these biomolecules through their molecular vibrational signatures. However, traditional Raman analysis relies on customized data-processing workflows that require fluorescence background removal and prior identification of the Raman peaks of interest, introducing potential biases and inconsistencies. Here, we introduce DIVA (Deep-learning-based Investigation of Vibrational Raman spectra for plant-stress Analysis), a fully automated workflow based on a variational autoencoder. Unlike conventional approaches, DIVA processes native Raman spectra, including fluorescence backgrounds, without manual preprocessing, identifying and quantifying significant spectral features in an unbiased manner. We applied DIVA to detect a range of plant stresses, including abiotic stressors (shading, high light intensity, high temperature) and biotic stressors (bacterial infections). By integrating deep learning with vibrational spectroscopy, DIVA paves the way for AI-driven plant health assessment, fostering more resilient and sustainable agricultural practices.
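DIVA's exact architecture is detailed in the preprint; the sketch below shows only a generic minimal variational autoencoder for 1D spectra in PyTorch, illustrating the encode-sample-decode structure and the two-term (reconstruction plus KL) loss that such workflows build on. All layer sizes and the dummy batch are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectraVAE(nn.Module):
    """Minimal variational autoencoder for 1D spectra (e.g., Raman intensities)."""

    def __init__(self, n_channels=1024, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_channels, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)      # latent mean
        self.to_logvar = nn.Linear(256, latent_dim)  # latent log-variance
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, n_channels)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to the standard normal prior.
    rec = F.mse_loss(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

model = SpectraVAE()
spectra = torch.rand(16, 1024)  # a dummy batch of 16 spectra
recon, mu, logvar = model(spectra)
print(vae_loss(recon, spectra, mu, logvar).item())
```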

Latent Space-Driven Quantification of Biofilm Formation using Time Resolved Droplet Microfluidics on ArXiv

Automated segmentation of bacterial structures within a droplet. The image shows a bright-field microscopy view where a large biofilm region (green, outlined in blue) has been segmented from surrounding features. Small aggregates (yellow contours) are also highlighted. This segmentation enables structural differentiation of biofilm components for downstream quantitative analysis. (Image by D. Pérez Guerrero.)
Latent Space-Driven Quantification of Biofilm Formation using Time Resolved Droplet Microfluidics
Daniela Pérez Guerrero, Jesús Manuel Antúnez Domínguez, Aurélie Vigne, Daniel Midtvedt, Wylie Ahmed, Lisa D. Muiznieks, Giovanni Volpe, Caroline Beck Adiels
arXiv: 2507.07632

Bacterial biofilms play a significant role in various fields that impact our daily lives, from detrimental public health hazards to beneficial applications in bioremediation, biodegradation, and wastewater treatment. However, high-resolution tools for studying their dynamic responses to environmental changes and collective cellular behavior remain scarce. To characterize and quantify biofilm development, we present a droplet-based microfluidic platform combined with an image analysis tool for in situ studies. In this setup, Bacillus subtilis was inoculated in liquid Lysogeny Broth microdroplets, and biofilm formation was examined within emulsions at the water-oil interface. Bacteria were encapsulated in droplets, which were then trapped in compartments, allowing continuous optical access throughout biofilm formation. Droplets, each forming a distinct microenvironment, were generated at high throughput using flow-controlled pressure pumps, ensuring monodispersity. A microfluidic multi-injection valve enabled rapid switching of encapsulation conditions without disrupting droplet generation, allowing side-by-side comparison. Our platform supports fluorescence microscopy imaging and quantitative analysis of droplet content, along with time-lapse bright-field microscopy for dynamic observations. To process high-throughput, complex data, we integrated an automated, unsupervised image analysis tool based on a Variational Autoencoder (VAE). This AI-driven approach efficiently captured biofilm structures in a latent space, enabling detailed pattern recognition and analysis. Our results demonstrate the accurate detection and quantification of biofilms using thresholding and masking applied to latent space representations, enabling the precise measurement of biofilm and aggregate areas.
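As an illustration of the final quantification step mentioned above, measuring biofilm and aggregate areas by thresholding and masking, the following Python sketch applies Otsu thresholding and connected-component analysis to a dummy 2D map. The area cutoff and the synthetic image are hypothetical, and note that the paper's pipeline applies this step to latent-space representations rather than to raw images.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def quantify_areas(image, min_biofilm_area=500):
    """Threshold a 2D map, mask it, and split the segmented regions into a
    large 'biofilm' class and small 'aggregates' by an area cutoff (pixels)."""
    mask = image > threshold_otsu(image)  # binary mask of bright structures
    regions = regionprops(label(mask))    # connected components
    biofilm = sum(r.area for r in regions if r.area >= min_biofilm_area)
    aggregates = sum(r.area for r in regions if r.area < min_biofilm_area)
    return biofilm, aggregates

# Dummy map: one large blob plus a small one on a noisy background.
rng = np.random.default_rng(1)
img = rng.random((256, 256)) * 0.2
img[60:160, 60:160] += 0.8  # large 'biofilm' region
img[200:206, 30:36] += 0.8  # small 'aggregate'
print(quantify_areas(img))
```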

Seminar by G. Volpe and C. Manzo at CIG, Makerere University, Kampala, Uganda, 3 July 2025 (Online)

GAUDI leverages a hierarchical graph-convolutional variational autoencoder architecture, where an encoder progressively compresses the graph into a low-dimensional latent space, and a decoder reconstructs the graph from the latent embedding. (Image by M. Granfors and J. Pineda.)
Cutting Training Data Needs through Inductive Bias & Unsupervised Learning
Giovanni Volpe and Carlo Manzo
Computational Intelligence Group (CIG), Weekly Reading Session
Date: 3 July 2025
Time: 17:00
Place: Makerere University, Kampala, Uganda (Online)

Graphs provide a powerful framework for modeling complex systems, but their structural variability makes analysis and classification challenging. To address this, we introduce GAUDI (Graph Autoencoder Uncovering Descriptive Information), a novel unsupervised geometric deep learning framework that captures both local details and global structure. GAUDI employs an innovative hourglass architecture with hierarchical pooling and upsampling layers, linked through skip connections to preserve essential connectivity information throughout the encoding–decoding process. By mapping different realizations of a system — generated from the same underlying parameters — into a continuous, structured latent space, GAUDI disentangles invariant process-level features from stochastic noise. We demonstrate its power across multiple applications, including modeling small-world networks, characterizing protein assemblies from super-resolution microscopy, analyzing collective motion in the Vicsek model, and capturing age-related changes in brain connectivity. This approach not only improves the analysis of complex graphs but also provides new insights into emergent phenomena across diverse scientific domains.
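GAUDI's full hourglass architecture, with hierarchical pooling, upsampling, and skip connections, is described in the talk; as a self-contained toy showing just the encoder half of such a pipeline (graph convolutions followed by global pooling into a latent vector), here is a minimal PyTorch sketch. It is a generic illustration, not GAUDI's implementation, and the dummy graph is arbitrary.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """Minimal graph convolution: average neighbor features, then project."""

    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, x, adj):
        # adj: dense (n, n) adjacency with self-loops; row-normalize it.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        return torch.relu(self.lin((adj / deg) @ x))

class GraphEncoder(nn.Module):
    """Two graph convolutions followed by global mean pooling, compressing a
    whole graph into a single latent vector (the encoder half of a graph
    autoencoder)."""

    def __init__(self, d_in, d_hidden=32, d_latent=4):
        super().__init__()
        self.conv1 = GraphConv(d_in, d_hidden)
        self.conv2 = GraphConv(d_hidden, d_hidden)
        self.to_latent = nn.Linear(d_hidden, d_latent)

    def forward(self, x, adj):
        h = self.conv2(self.conv1(x, adj), adj)
        return self.to_latent(h.mean(dim=0))  # global mean pooling

# Dummy graph: 10 nodes with 3 features and a random symmetric adjacency.
n = 10
a = (torch.rand(n, n) > 0.7).float()
adj = ((a + a.T + torch.eye(n)) > 0).float()
x = torch.rand(n, 3)
print(GraphEncoder(d_in=3)(x, adj))  # a 4-dimensional graph embedding
```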

YouTube: Global graph features unveiled by unsupervised geometric deep learning

Invited Talk by G. Volpe at ELS XXI, Milazzo, Italy, 27 June 2025

DeepTrack 2 Logo. (Image from DeepTrack 2 Project)
What can deep learning do for electromagnetic light scattering?
Giovanni Volpe
Electromagnetic and Light Scattering (ELS) XXI
Date: 27 June 2025
Time: 9:00
Place: Milazzo, Italy

Electromagnetic light scattering underpins a wide range of phenomena in both fundamental and applied research, from characterizing complex materials to tracking particles and cells in microfluidic devices. Video microscopy, in particular, has become a powerful method for studying scattering processes and extracting quantitative information. Yet, conventional algorithmic approaches for analyzing scattering data often prove cumbersome, computationally expensive, and highly specialized.
Recent advances in deep learning offer a compelling alternative. By leveraging data-driven models, we can automate the extraction of scattering characteristics with unprecedented speed and accuracy—uncovering insights that classical techniques might miss or require substantial computation to achieve. Despite these advantages, deep-learning-based tools remain underutilized in light-scattering research, largely because of the steep learning curve required to design and train such models.
To address these challenges, we have developed a user-friendly software platform (DeepTrack, now in version 2.2) that simplifies the entire workflow of deep-learning applications in digital microscopy. DeepTrack enables straightforward creation of custom datasets, network architectures, and training pipelines specifically tailored for quantitative scattering analyses. In this talk, I will discuss how emerging deep-learning methods can be combined with advanced imaging technologies to push the boundaries of electromagnetic light scattering research—reducing computational overhead, improving accuracy, and ultimately broadening access to powerful, data-driven solutions.
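DeepTrack 2's actual API and tutorials are documented with the software; as a generic, self-contained illustration of the workflow outlined above (simulate images with known ground truth, then train a network to extract quantitative information from them), the following PyTorch sketch trains a small CNN to localize a Gaussian spot in synthetic noisy images. All names and parameters are hypothetical stand-ins, not DeepTrack calls.

```python
import torch
import torch.nn as nn

def make_image(size=32, sigma=2.0):
    """Toy 'scattering' image: a Gaussian spot at a random subpixel position,
    plus camera-like noise. Returns the image and the ground-truth position."""
    pos = torch.rand(2) * (size - 8) + 4
    yy, xx = torch.meshgrid(torch.arange(size, dtype=torch.float32),
                            torch.arange(size, dtype=torch.float32), indexing="ij")
    img = torch.exp(-((yy - pos[0]) ** 2 + (xx - pos[1]) ** 2) / (2 * sigma**2))
    return img + 0.05 * torch.randn(size, size), pos

class Localizer(nn.Module):
    """Small CNN that regresses the spot position from the image."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(16 * 8 * 8, 2),
        )

    def forward(self, x):
        return self.net(x)

model = Localizer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):  # train on freshly simulated batches
    samples = [make_image() for _ in range(32)]
    imgs = torch.stack([s[0] for s in samples]).unsqueeze(1)
    targets = torch.stack([s[1] for s in samples])
    loss = nn.functional.mse_loss(model(imgs), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.3f}")
```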