Quantitative evaluation of methods to analyze motion changes in single-particle experiments published in Nature Communications

Rationale for the challenge organization. The interactions of biomolecules in complex environments, such as the cell membrane, regulate physiological processes in living systems. These interactions produce changes in molecular motion that can be used as a proxy to measure interaction parameters. Time-lapse single-molecule imaging allows us to visualize these processes with high spatiotemporal resolution and, in combination with single-particle tracking methods, provides trajectories of individual molecules. (Image by the Authors of the manuscript.)
Quantitative evaluation of methods to analyze motion changes in single-particle experiments
Gorka Muñoz-Gil, Harshith Bachimanchi, Jesús Pineda, Benjamin Midtvedt, Gabriel Fernández-Fernández, Borja Requena, Yusef Ahsini, Solomon Asghar, Jaeyong Bae, Francisco J. Barrantes, Steen W. B. Bender, Clément Cabriel, J. Alberto Conejero, Marc Escoto, Xiaochen Feng, Rasched Haidari, Nikos S. Hatzakis, Zihan Huang, Ignacio Izeddin, Hawoong Jeong, Yuan Jiang, Jacob Kæstel-Hansen, Judith Miné-Hattab, Ran Ni, Junwoo Park, Xiang Qu, Lucas A. Saavedra, Hao Sha, Nataliya Sokolovska, Yongbing Zhang, Giorgio Volpe, Maciej Lewenstein, Ralf Metzler, Diego Krapf, Giovanni Volpe, Carlo Manzo
Nature Communications 16, 6749 (2025)
arXiv: 2311.18100
doi: https://doi.org/10.1038/s41467-025-61949-x

The analysis of live-cell single-molecule imaging experiments can reveal valuable information about the heterogeneity of transport processes and interactions between cell components. These characteristics are seen as motion changes in the particle trajectories. Despite the existence of multiple approaches to carry out this type of analysis, no objective assessment of these methods has been performed so far. Here, we report the results of a competition to characterize and rank the performance of these methods when analyzing the dynamic behavior of single molecules. To run this competition, we implemented a software library that simulates realistic data corresponding to widespread diffusion and interaction models, both in the form of trajectories and videos obtained in typical experimental conditions. The competition constitutes the first assessment of these methods, providing insights into the current limitations of the field, fostering the development of new approaches, and guiding researchers to identify optimal tools for analyzing their experiments.
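
The challenge data were produced with the authors' simulation library; purely as a stand-in illustration of the kind of trajectory involved (not that library), the sketch below simulates a 2D Brownian walk whose diffusion coefficient switches at a random changepoint. The function and all parameter values are hypothetical.

```python
import numpy as np

def simulate_switching_trajectory(n_steps=200, dt=0.1, d1=0.1, d2=1.0, seed=None):
    """Simulate a 2D Brownian trajectory whose diffusion coefficient
    switches from d1 to d2 at a random changepoint (illustrative two-state
    model; not the challenge's simulation library)."""
    rng = np.random.default_rng(seed)
    changepoint = rng.integers(n_steps // 4, 3 * n_steps // 4)
    # Per-dimension displacement standard deviation: sqrt(2 * D * dt).
    sigma = np.where(np.arange(n_steps) < changepoint,
                     np.sqrt(2 * d1 * dt), np.sqrt(2 * d2 * dt))
    steps = rng.normal(0.0, sigma[:, None], size=(n_steps, 2))
    return np.cumsum(steps, axis=0), changepoint

trajectory, changepoint = simulate_switching_trajectory(seed=1)
print(f"Diffusion coefficient switches at step {changepoint}")
```

Detecting such a changepoint from the noisy trajectory alone is precisely the task on which the competing methods were ranked.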

Deep-Learning Investigation of Vibrational Raman Spectra for Plant-Stress Analysis on ArXiv

In this work, we present an unsupervised deep learning framework using Variational Autoencoders (VAEs) to decode stress-specific biomolecular fingerprints directly from Raman spectral data across multiple plant species and genotypes. (Image by the Authors of the manuscript. A part of the image was designed using Biorender.com.)
From Spectra to Stress: Unsupervised Deep Learning for Plant Health Monitoring
Anoop C. Patil, Benny Jian Rong Sng, Yu-Wei Chang, Joana B. Pereira, Chua Nam-Hai, Rajani Sarojam, Gajendra Pratap Singh, In-Cheol Jang, and Giovanni Volpe
arXiv: 2507.15772

Detecting stress in plants is crucial for both open-farm and controlled-environment agriculture. Biomolecules within plants serve as key stress indicators, offering vital markers for continuous health monitoring and early disease detection. Raman spectroscopy provides a powerful, non-invasive means to quantify these biomolecules through their molecular vibrational signatures. However, traditional Raman analysis relies on customized data-processing workflows that require fluorescence background removal and prior identification of Raman peaks of interest, introducing potential biases and inconsistencies. Here, we introduce DIVA (Deep-learning-based Investigation of Vibrational Raman spectra for plant-stress Analysis), a fully automated workflow based on a variational autoencoder. Unlike conventional approaches, DIVA processes native Raman spectra, including fluorescence backgrounds, without manual preprocessing, identifying and quantifying significant spectral features in an unbiased manner. We applied DIVA to detect a range of plant stresses, including abiotic (shading, high light intensity, high temperature) and biotic stressors (bacterial infections). By integrating deep learning with vibrational spectroscopy, DIVA paves the way for AI-driven plant health assessment, fostering more resilient and sustainable agricultural practices.
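
To convey the underlying idea, the snippet below implements a minimal variational autoencoder for 1D spectra in PyTorch. It is a generic VAE sketch, not the DIVA architecture; the layer sizes, latent dimension, and random input batch are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpectraVAE(nn.Module):
    """Minimal variational autoencoder for raw 1D spectra
    (a generic sketch; not the actual DIVA architecture)."""
    def __init__(self, n_channels=1024, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_channels, 256), nn.ReLU(),
            nn.Linear(256, 2 * latent_dim),  # outputs mean and log-variance
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, n_channels),
        )

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        recon = self.decoder(z)
        # ELBO: squared reconstruction error plus KL divergence to the unit prior.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        loss = ((recon - x) ** 2).sum(dim=-1) + kl
        return recon, z, loss.mean()

model = SpectraVAE()
spectra = torch.rand(4, 1024)  # batch of native spectra, fluorescence background included
recon, latent, loss = model(spectra)
loss.backward()  # ready for a standard training loop
```

Because the autoencoder reconstructs the full native spectrum, fluorescence background and all, no peak selection or baseline correction is imposed beforehand, which is the source of the unbiased feature extraction claimed above.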

Latent Space-Driven Quantification of Biofilm Formation using Time Resolved Droplet Microfluidics on ArXiv

Automated segmentation of bacterial structures within a droplet. The image shows a bright-field microscopy view where a large biofilm region (green, outlined in blue) has been segmented from surrounding features. Small aggregates (yellow contours) are also highlighted. This segmentation enables structural differentiation of biofilm components for downstream quantitative analysis. (Image by D. Pérez Guerrero.)
Latent Space-Driven Quantification of Biofilm Formation using Time Resolved Droplet Microfluidics
Daniela Pérez Guerrero, Jesús Manuel Antúnez Domínguez, Aurélie Vigne, Daniel Midtvedt, Wylie Ahmed, Lisa D. Muiznieks, Giovanni Volpe, Caroline Beck Adiels
arXiv: 2507.07632

Bacterial biofilms play a significant role in various fields that impact our daily lives, from detrimental public health hazards to beneficial applications in bioremediation, biodegradation, and wastewater treatment. However, high-resolution tools for studying their dynamic responses to environmental changes and collective cellular behavior remain scarce. To characterize and quantify biofilm development, we present a droplet-based microfluidic platform combined with an image analysis tool for in-situ studies. In this setup, Bacillus subtilis was inoculated in liquid Lysogeny Broth microdroplets, and biofilm formation was examined within emulsions at the water-oil interface. Bacteria were encapsulated in droplets, which were then trapped in compartments, allowing continuous optical access throughout biofilm formation. Droplets, each forming a distinct microenvironment, were generated at high throughput using flow-controlled pressure pumps, ensuring monodispersity. A microfluidic multi-injection valve enabled rapid switching of encapsulation conditions without disrupting droplet generation, allowing side-by-side comparison. Our platform supports fluorescence microscopy imaging and quantitative analysis of droplet content, along with time-lapse bright-field microscopy for dynamic observations. To process high-throughput, complex data, we integrated an automated, unsupervised image analysis tool based on a Variational Autoencoder (VAE). This AI-driven approach efficiently captured biofilm structures in a latent space, enabling detailed pattern recognition and analysis. Our results demonstrate the accurate detection and quantification of biofilms using thresholding and masking applied to latent space representations, enabling the precise measurement of biofilm and aggregate areas.
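
To illustrate the final quantification step, the sketch below applies Otsu thresholding and connected-component analysis to a per-pixel latent feature map, separating large biofilm regions from small aggregates by area. The `quantify_biofilm` helper, the `min_area` cutoff, and the random input map are hypothetical; in the actual pipeline the map would be derived from the trained VAE encoder.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def quantify_biofilm(latent_map, min_area=50):
    """Threshold a latent-space feature map and measure biofilm vs.
    aggregate areas (illustrative sketch; the real pipeline obtains
    latent_map from the trained VAE encoder)."""
    mask = latent_map > threshold_otsu(latent_map)
    regions = regionprops(label(mask))
    biofilm_area = sum(r.area for r in regions if r.area >= min_area)
    aggregate_area = sum(r.area for r in regions if r.area < min_area)
    return biofilm_area, aggregate_area

# Hypothetical latent feature map for a single droplet image:
latent_map = np.random.rand(256, 256)
biofilm_area, aggregate_area = quantify_biofilm(latent_map)
```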

Seminar by G. Volpe and C. Manzo at CIG, Makerere University, Kampala, Uganda, 3 July 2025 (Online)

GAUDI leverages a hierarchical graph-convolutional variational autoencoder architecture, where an encoder progressively compresses the graph into a low-dimensional latent space, and a decoder reconstructs the graph from the latent embedding. (Image by M. Granfors and J. Pineda.)
Cutting Training Data Needs through Inductive Bias & Unsupervised Learning
Giovanni Volpe and Carlo Manzo
Computational Intelligence Group (CIG), Weekly Reading Session
Date: 3 July 2025
Time: 17:00
Place: Makerere University, Kampala, Uganda (Online)

Graphs provide a powerful framework for modeling complex systems, but their structural variability makes analysis and classification challenging. To address this, we introduce GAUDI (Graph Autoencoder Uncovering Descriptive Information), a novel unsupervised geometric deep learning framework that captures both local details and global structure. GAUDI employs an innovative hourglass architecture with hierarchical pooling and upsampling layers, linked through skip connections to preserve essential connectivity information throughout the encoding–decoding process. By mapping different realizations of a system — generated from the same underlying parameters — into a continuous, structured latent space, GAUDI disentangles invariant process-level features from stochastic noise. We demonstrate its power across multiple applications, including modeling small-world networks, characterizing protein assemblies from super-resolution microscopy, analyzing collective motion in the Vicsek model, and capturing age-related changes in brain connectivity. This approach not only improves the analysis of complex graphs but also provides new insights into emergent phenomena across diverse scientific domains.
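
For intuition, the toy sketch below encodes a graph into a single latent vector with one round of GCN-style message passing and reconstructs its adjacency with an inner-product decoder. It only gestures at the idea: GAUDI's hourglass architecture additionally uses hierarchical pooling, upsampling, and skip connections across multiple scales, none of which are reproduced here.

```python
import torch
import torch.nn as nn

class GraphAutoencoder(nn.Module):
    """Toy graph autoencoder: one GCN-style layer, a graph-level latent
    vector, and an inner-product edge decoder. GAUDI's hierarchical
    pooling/upsampling stages are omitted in this sketch."""
    def __init__(self, n_features, latent_dim=16):
        super().__init__()
        self.gcn = nn.Linear(n_features, 64)
        self.to_latent = nn.Linear(64, latent_dim)
        self.from_latent = nn.Linear(latent_dim, 64)

    def forward(self, x, adj):
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        h = torch.relu(self.gcn(adj @ x / deg))      # mean-aggregate neighbor features
        z = self.to_latent(h.mean(dim=0))            # graph-level latent embedding
        h_dec = torch.relu(self.from_latent(z)) + h  # skip connection to the encoder
        adj_rec = torch.sigmoid(h_dec @ h_dec.T)     # inner-product edge decoder
        return z, adj_rec

x = torch.rand(10, 5)                     # 10 nodes with 5 features each
adj = (torch.rand(10, 10) > 0.7).float()  # random adjacency matrix
z, adj_rec = GraphAutoencoder(n_features=5)(x, adj)
```

Training such a model to reconstruct many realizations of the same process pushes the latent vector z to capture the shared, process-level structure while averaging out realization-specific noise.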

YouTube: Global graph features unveiled by unsupervised geometric deep learning

Invited Talk by G. Volpe at ELS XXI, Milazzo, Italy, 27 June 2025

DeepTrack 2 Logo. (Image from the DeepTrack 2 Project)
What can deep learning do for electromagnetic light scattering?
Giovanni Volpe
Electromagnetic and Light Scattering (ELS) XXI
Date: 27 June 2025
Time: 9:00
Place: Milazzo, Italy

Electromagnetic light scattering underpins a wide range of phenomena in both fundamental and applied research, from characterizing complex materials to tracking particles and cells in microfluidic devices. Video microscopy, in particular, has become a powerful method for studying scattering processes and extracting quantitative information. Yet, conventional algorithmic approaches for analyzing scattering data often prove cumbersome, computationally expensive, and highly specialized.
Recent advances in deep learning offer a compelling alternative. By leveraging data-driven models, we can automate the extraction of scattering characteristics with unprecedented speed and accuracy—uncovering insights that classical techniques might miss or require substantial computation to achieve. Despite these advantages, deep-learning-based tools remain underutilized in light-scattering research, largely because of the steep learning curve required to design and train such models.
To address these challenges, we have developed a user-friendly software platform (DeepTrack, now in version 2.2) that simplifies the entire workflow of deep-learning applications in digital microscopy. DeepTrack enables straightforward creation of custom datasets, network architectures, and training pipelines specifically tailored for quantitative scattering analyses. In this talk, I will discuss how emerging deep-learning methods can be combined with advanced imaging technologies to push the boundaries of electromagnetic light scattering research—reducing computational overhead, improving accuracy, and ultimately broadening access to powerful, data-driven solutions.
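
As a flavor of the kind of pipeline DeepTrack 2 enables, the sketch below builds a synthetic image of a fluorescent point scatterer. Feature names follow the patterns in the DeepTrack 2 documentation, but signatures may differ between versions, so treat this as an illustrative sketch rather than a verbatim API reference.

```python
import numpy as np
import deeptrack as dt

# A point scatterer with a randomized position, imaged through simulated
# fluorescence optics (names per the DeepTrack 2 docs; verify against
# your installed version).
particle = dt.PointParticle(
    position=lambda: np.random.uniform(8, 56, size=2),
    intensity=100,
)
optics = dt.Fluorescence(
    NA=0.8,
    wavelength=680e-9,
    magnification=10,
    resolution=1e-6,
    output_region=(0, 0, 64, 64),
)
pipeline = optics(particle)

image = pipeline.update().resolve()  # each update() yields a new synthetic image
```

Pipelines like this generate unlimited labeled training data on the fly, which is what removes the usual bottleneck of annotating experimental scattering images by hand.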

Invited Seminar by G. Volpe at Cognitive and Behavior Changes in Parkinson’s Disease and Parkinsonism: Advances and Challenges, Santa Maria di Leuca, Italy, 21 May 2025

Braph 2 Logo. (Image from the Braph 2 Project)
The Role of Artificial Intelligence in Advanced Neuroimaging Analysis
Giovanni Volpe
Cognitive and Behavior Changes in Parkinson’s Disease and Parkinsonism: Advances and Challenges
Date: 21 May 2025
Time: 11:50
Place: Tricase, Santa Maria di Leuca, Italy

Delayed Active Swimmer in a Velocity Landscape on ArXiv

Experimental setup. (Top) A thermophoretic microswimmer undergoes active Brownian motion in a spatially varying laser intensity profile that controls its self-thermophoretic propulsion via a feedback loop. (Bottom) Sample trajectory of the microswimmer over 15 minutes in a chamber. Colors indicate instantaneous velocity. (Image from the manuscript.)
Delayed Active Swimmer in a Velocity Landscape
Viktor Holubec, Alexander Fischer, Giovanni Volpe, Frank Cichos
arXiv: 2505.11042

Self-propelled active particles exhibit delayed responses to environmental changes, modulating their propulsion speed through intrinsic sensing and feedback mechanisms. This adaptive behavior fundamentally determines their dynamics and self-organization in active matter systems, with implications for biological microswimmers and engineered microrobots. Here, we investigate active Brownian particles whose propulsion speed is governed by spatially varying activity landscapes, incorporating a temporal delay between environmental sensing and speed adaptation. Through analytical solutions derived for both short-time and long-time delay regimes, we demonstrate that steady-state density and polarization profiles exhibit maxima at characteristic delays. Significantly, we observe that the polarization profile undergoes sign reversal when the swimming distance during the delay time exceeds the characteristic diffusion length, providing a novel mechanism for controlling particle transport without external fields. Our theoretical predictions, validated through experimental observations and numerical simulations, establish time delay as a crucial control parameter for particle transport and organization in active matter systems. These findings provide insights into how biological microorganisms might use response delays to gain navigation advantages and suggest design principles for synthetic microswimmers with programmable responses to heterogeneous environments.
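
A minimal numerical sketch of the model studied here: a 2D active Brownian particle integrated with an Euler scheme, whose propulsion speed is set by a sinusoidal activity landscape evaluated at the position sensed a delay tau earlier. All parameter values are arbitrary choices for illustration, not those of the experiment.

```python
import numpy as np

def simulate_delayed_abp(n_steps=100_000, dt=1e-3, tau=0.5,
                         D=0.01, D_rot=1.0, v_max=5.0, L=10.0, seed=0):
    """Euler scheme for a 2D active Brownian particle whose speed follows
    a sinusoidal activity landscape v(x) evaluated with a sensing delay tau
    (illustrative parameter values only)."""
    rng = np.random.default_rng(seed)
    delay_steps = int(tau / dt)
    x = np.zeros((n_steps, 2))
    theta = 0.0
    for t in range(1, n_steps):
        x_sensed = x[max(t - 1 - delay_steps, 0)]  # position sensed a time tau ago
        v = v_max * (0.5 + 0.5 * np.sin(2 * np.pi * x_sensed[0] / L))
        theta += np.sqrt(2 * D_rot * dt) * rng.normal()  # rotational diffusion
        x[t] = (x[t - 1]
                + v * dt * np.array([np.cos(theta), np.sin(theta)])  # delayed propulsion
                + np.sqrt(2 * D * dt) * rng.normal(size=2))          # translational noise
    return x

trajectory = simulate_delayed_abp()
```

Varying tau in such a simulation is a direct way to probe the predicted maxima in the density and polarization profiles and the sign reversal of the polarization at long delays.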

SmartTrap: Automated Precision Experiments with Optical Tweezers on ArXiv

Illustration of three different experiments autonomously performed by the SmartTrap system: DNA pulling experiments (top), red blood cell stretching (bottom left), and particle-particle interaction measurements (bottom right). (Image by M. Selin.)
SmartTrap: Automated Precision Experiments with Optical Tweezers
Martin Selin, Antonio Ciarlo, Giuseppe Pesce, Lars Bengtsson, Joan Camunas-Soler, Vinoth Sundar Rajan, Fredrik Westerlund, L. Marcus Wilhelmsson, Isabel Pastor, Felix Ritort, Steven B. Smith, Carlos Bustamante, Giovanni Volpe
arXiv: 2505.05290

There is a trend in research towards more automation using smart systems powered by artificial intelligence. While experiments are often challenging to automate, they can greatly benefit from automation by reducing labor and increasing reproducibility. For example, optical tweezers are widely employed in single-molecule biophysics, cell biomechanics, and soft matter physics, but they still require a human operator, resulting in low throughput and limited repeatability. Here, we present a smart optical tweezers platform, which we name SmartTrap, capable of performing complex experiments completely autonomously. SmartTrap integrates real-time 3D particle tracking using deep learning, custom electronics for precise feedback control, and a microfluidic setup for particle handling. We demonstrate the ability of SmartTrap to operate continuously, acquiring high-precision data over extended periods of time, through a series of experiments. By bridging the gap between manual experimentation and autonomous operation, SmartTrap establishes a robust and open-source framework for the next generation of optical tweezers research, capable of performing large-scale studies in single-molecule biophysics, cell mechanics, and colloidal science with reduced experimental overhead and operator bias.
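
As a schematic of the closed-loop principle, the snippet below shows a hypothetical proportional feedback loop: a tracker object estimates the trapped particle's 3D position and a stage object applies a correction toward a setpoint. Both interfaces, the stub classes, and the gain are placeholders, not SmartTrap's actual API.

```python
import time

class StubTracker:
    """Stand-in for the deep-learning tracker (returns a fixed position)."""
    def locate(self):
        return 0.0, 0.0, 1.2  # x, y, z estimates (made-up values)

class StubStage:
    """Stand-in for the motorized stage interface."""
    z = 0.0
    def move_z(self, dz):
        self.z += dz

def feedback_loop(tracker, stage, target_z, gain=0.5, dt=0.01, n_iter=100):
    """Hypothetical proportional feedback loop in the spirit of SmartTrap:
    estimate the particle's position, then nudge the stage toward the
    z setpoint (placeholder interfaces, not SmartTrap's API)."""
    for _ in range(n_iter):
        x, y, z = tracker.locate()           # real-time 3D position estimate
        stage.move_z(gain * (target_z - z))  # proportional correction
        time.sleep(dt)                       # ~100 Hz loop rate

feedback_loop(StubTracker(), StubStage(), target_z=1.0)
```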

Invited Talk by G. Volpe at OPIC/OMC 2025, Yokohama, Japan, 21 April 2025 (Online, Pre-recorded)

DeepTrack 2 Logo. (Image from the DeepTrack 2 Project)
How can deep learning enhance microscopy?
Giovanni Volpe
Optics & Photonics International Congress 2025 (OPIC 2025), The 11th Optical Manipulation and Structured Materials Conference (OMC 2025)
Date: 21 April 2025
Time: 13:45 JST
Place: Yokohama, Japan (Online, Pre-recorded)