Quantitative evaluation of methods to analyze motion changes in single-particle experiments published in Nature Communications

Rationale for the challenge organization. The interactions of biomolecules in complex environments, such as the cell membrane, regulate physiological processes in living systems. These interactions produce changes in molecular motion that can be used as a proxy to measure interaction parameters. Time-lapse single-molecule imaging allows us to visualize these processes with high spatiotemporal resolution and, in combination with single-particle tracking methods, provides trajectories of individual molecules. (Image by the Authors of the manuscript.)
Quantitative evaluation of methods to analyze motion changes in single-particle experiments
Gorka Muñoz-Gil, Harshith Bachimanchi, Jesús Pineda, Benjamin Midtvedt, Gabriel Fernández-Fernández, Borja Requena, Yusef Ahsini, Solomon Asghar, Jaeyong Bae, Francisco J. Barrantes, Steen W. B. Bender, Clément Cabriel, J. Alberto Conejero, Marc Escoto, Xiaochen Feng, Rasched Haidari, Nikos S. Hatzakis, Zihan Huang, Ignacio Izeddin, Hawoong Jeong, Yuan Jiang, Jacob Kæstel-Hansen, Judith Miné-Hattab, Ran Ni, Junwoo Park, Xiang Qu, Lucas A. Saavedra, Hao Sha, Nataliya Sokolovska, Yongbing Zhang, Giorgio Volpe, Maciej Lewenstein, Ralf Metzler, Diego Krapf, Giovanni Volpe, Carlo Manzo
Nature Communications 16, 6749 (2025)
arXiv: 2311.18100
doi: https://doi.org/10.1038/s41467-025-61949-x

The analysis of live-cell single-molecule imaging experiments can reveal valuable information about the heterogeneity of transport processes and interactions between cell components. These characteristics are seen as motion changes in the particle trajectories. Despite the existence of multiple approaches to carry out this type of analysis, no objective assessment of these methods has been performed so far. Here, we report the results of a competition to characterize and rank the performance of these methods when analyzing the dynamic behavior of single molecules. To run this competition, we implemented a software library that simulates realistic data corresponding to widespread diffusion and interaction models, both in the form of trajectories and videos obtained in typical experimental conditions. The competition constitutes the first assessment of these methods, providing insights into the current limitations of the field, fostering the development of new approaches, and guiding researchers to identify optimal tools for analyzing their experiments.
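The challenge data were generated with the authors' dedicated simulation library. Purely as an illustration of the kind of data involved (this is a generic sketch, not the library's actual API), a two-dimensional Brownian trajectory with a single change in diffusion coefficient can be simulated as follows:

```python
import numpy as np

def simulate_changepoint_trajectory(n_steps=200, switch=100,
                                    d1=0.1, d2=1.0, dt=1.0, seed=0):
    """Simulate a 2D Brownian trajectory whose diffusion coefficient
    switches from d1 to d2 at step `switch` (a single motion change)."""
    rng = np.random.default_rng(seed)
    # Per-step standard deviation in each segment: sqrt(2 * D * dt)
    sigma = np.where(np.arange(n_steps) < switch,
                     np.sqrt(2 * d1 * dt), np.sqrt(2 * d2 * dt))
    steps = rng.normal(scale=sigma[:, None], size=(n_steps, 2))
    return np.cumsum(steps, axis=0)

traj = simulate_changepoint_trajectory()
# Mean squared single-step displacement before and after the switch:
# larger steps after the switch reveal the motion change
msd1 = np.mean(np.sum(np.diff(traj[:100], axis=0) ** 2, axis=1))
msd2 = np.mean(np.sum(np.diff(traj[100:], axis=0) ** 2, axis=1))
```

Detecting the location of such switches from noisy trajectories (or directly from videos) is precisely the task the competition methods were ranked on.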

Deep-Learning Investigation of Vibrational Raman Spectra for Plant-Stress Analysis on arXiv

In this work, we present an unsupervised deep learning framework using Variational Autoencoders (VAEs) to decode stress-specific biomolecular fingerprints directly from Raman spectral data across multiple plant species and genotypes. (Image by the Authors of the manuscript. A part of the image was designed using Biorender.com.)
From Spectra to Stress: Unsupervised Deep Learning for Plant Health Monitoring
Anoop C. Patil, Benny Jian Rong Sng, Yu-Wei Chang, Joana B. Pereira, Chua Nam-Hai, Rajani Sarojam, Gajendra Pratap Singh, In-Cheol Jang, and Giovanni Volpe
arXiv: 2507.15772

Detecting stress in plants is crucial for both open-farm and controlled-environment agriculture. Biomolecules within plants serve as key stress indicators, offering vital markers for continuous health monitoring and early disease detection. Raman spectroscopy provides a powerful, non-invasive means to quantify these biomolecules through their molecular vibrational signatures. However, traditional Raman analysis relies on customized data-processing workflows that require fluorescence background removal and prior identification of Raman peaks of interest, introducing potential biases and inconsistencies. Here, we introduce DIVA (Deep-learning-based Investigation of Vibrational Raman spectra for plant-stress Analysis), a fully automated workflow based on a variational autoencoder. Unlike conventional approaches, DIVA processes native Raman spectra, including fluorescence backgrounds, without manual preprocessing, identifying and quantifying significant spectral features in an unbiased manner. We applied DIVA to detect a range of plant stresses, including abiotic (shading, high light intensity, high temperature) and biotic stressors (bacterial infections). By integrating deep learning with vibrational spectroscopy, DIVA paves the way for AI-driven plant health assessment, fostering more resilient and sustainable agricultural practices.
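As a point of reference for the manual preprocessing that DIVA makes unnecessary: a typical fluorescence-background-removal step fits a smooth baseline to the raw spectrum and subtracts it. The sketch below is a generic illustration (a polynomial baseline on a synthetic spectrum), not part of the DIVA workflow:

```python
import numpy as np

def subtract_polynomial_baseline(wavenumbers, intensities, degree=3):
    """Estimate a smooth fluorescence background with a low-order
    polynomial fit and subtract it from the raw spectrum."""
    coeffs = np.polyfit(wavenumbers, intensities, degree)
    baseline = np.polyval(coeffs, wavenumbers)
    return intensities - baseline, baseline

# Synthetic spectrum: a broad fluorescence slope plus one narrow Raman peak
x = np.linspace(400, 1800, 700)           # wavenumber axis (cm^-1)
fluorescence = 1e-3 * x + 0.5             # slowly varying background
peak = 2.0 * np.exp(-((x - 1000) ** 2) / (2 * 15 ** 2))
corrected, baseline = subtract_polynomial_baseline(x, fluorescence + peak)
```

Every choice here (polynomial degree, which peaks to keep) is a hand-tuned decision of the kind that can bias downstream analysis, which is what motivates an end-to-end approach operating on the native spectra.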

Seminar by G. Volpe and C. Manzo at CIG, Makerere University, Kampala, Uganda, 3 July 2025 (Online)

GAUDI leverages a hierarchical graph-convolutional variational autoencoder architecture, where an encoder progressively compresses the graph into a low-dimensional latent space, and a decoder reconstructs the graph from the latent embedding. (Image by M. Granfors and J. Pineda.)
Cutting Training Data Needs through Inductive Bias & Unsupervised Learning
Giovanni Volpe and Carlo Manzo
Computational Intelligence Group (CIG), Weekly Reading Session
Date: 3 July 2025
Time: 17:00
Place: Makerere University, Kampala, Uganda (Online)

Graphs provide a powerful framework for modeling complex systems, but their structural variability makes analysis and classification challenging. To address this, we introduce GAUDI (Graph Autoencoder Uncovering Descriptive Information), a novel unsupervised geometric deep learning framework that captures both local details and global structure. GAUDI employs an innovative hourglass architecture with hierarchical pooling and upsampling layers, linked through skip connections to preserve essential connectivity information throughout the encoding–decoding process. By mapping different realizations of a system — generated from the same underlying parameters — into a continuous, structured latent space, GAUDI disentangles invariant process-level features from stochastic noise. We demonstrate its power across multiple applications, including modeling small-world networks, characterizing protein assemblies from super-resolution microscopy, analyzing collective motion in the Vicsek model, and capturing age-related changes in brain connectivity. This approach not only improves the analysis of complex graphs but also provides new insights into emergent phenomena across diverse scientific domains.
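The hierarchical pooling at the heart of GAUDI's hourglass architecture coarsens a graph by merging nodes into clusters. As a generic illustration of this operation (a standard coarsening step, not GAUDI's actual layers), an assignment matrix S turns an adjacency matrix A into a pooled adjacency S^T A S:

```python
import numpy as np

def pool_graph(adjacency, assignment):
    """Coarsen a graph: nodes are merged into clusters according to an
    assignment matrix S, giving the pooled adjacency A' = S^T A S."""
    return assignment.T @ adjacency @ assignment

# Toy graph: two triangles joined by a single edge
A = np.zeros((6, 6))
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
for i, j in edges:
    A[i, j] = A[j, i] = 1

# Hard assignment: nodes 0-2 -> cluster 0, nodes 3-5 -> cluster 1
S = np.zeros((6, 2))
S[:3, 0] = 1
S[3:, 1] = 1
A_coarse = pool_graph(A, S)
# Diagonal entries count intra-cluster edge endpoints (2 per edge);
# off-diagonal entries count edges between the two clusters
```

In a learned model the assignment is soft and trained end-to-end, and the decoder's upsampling layers invert this coarsening to reconstruct the original graph.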

YouTube: Global graph features unveiled by unsupervised geometric deep learning

Invited Talk by G. Volpe at ELS XXI, Milazzo, Italy, 27 June 2025

DeepTrack 2 Logo. (Image from DeepTrack 2 Project)
What can deep learning do for electromagnetic light scattering?
Giovanni Volpe
Electromagnetic and Light Scattering (ELS) XXI
Date: 27 June 2025
Time: 9:00
Place: Milazzo, Italy

Electromagnetic light scattering underpins a wide range of phenomena in both fundamental and applied research, from characterizing complex materials to tracking particles and cells in microfluidic devices. Video microscopy, in particular, has become a powerful method for studying scattering processes and extracting quantitative information. Yet, conventional algorithmic approaches for analyzing scattering data often prove cumbersome, computationally expensive, and highly specialized.
Recent advances in deep learning offer a compelling alternative. By leveraging data-driven models, we can automate the extraction of scattering characteristics with unprecedented speed and accuracy—uncovering insights that classical techniques might miss or require substantial computation to achieve. Despite these advantages, deep-learning-based tools remain underutilized in light-scattering research, largely because of the steep learning curve required to design and train such models.
To address these challenges, we have developed a user-friendly software platform (DeepTrack, now in version 2.2) that simplifies the entire workflow of deep-learning applications in digital microscopy. DeepTrack enables straightforward creation of custom datasets, network architectures, and training pipelines specifically tailored for quantitative scattering analyses. In this talk, I will discuss how emerging deep-learning methods can be combined with advanced imaging technologies to push the boundaries of electromagnetic light scattering research—reducing computational overhead, improving accuracy, and ultimately broadening access to powerful, data-driven solutions.

Invited Seminar by G. Volpe at Cognitive and Behavior Changes in Parkinson’s Disease and Parkinsonism: Advances and Challenges, Santa Maria di Leuca, Italy, 21 May 2025

Braph 2 Logo. (Image from the Braph 2 Project)
The Role of Artificial Intelligence in Advanced Neuroimaging Analysis
Giovanni Volpe
Cognitive and Behavior Changes in Parkinson’s Disease and Parkinsonism: Advances and Challenges
Date: 21 May 2025
Time: 11:50
Place: Tricase, Santa Maria di Leuca, Italy

SmartTrap: Automated Precision Experiments with Optical Tweezers on ArXiv

Illustration of three different experiments autonomously performed by the SmartTrap system: DNA pulling experiments (top), red blood cell stretching (bottom left), and particle-particle interaction measurements (bottom right). (Image by M. Selin.)
SmartTrap: Automated Precision Experiments with Optical Tweezers
Martin Selin, Antonio Ciarlo, Giuseppe Pesce, Lars Bengtsson, Joan Camunas-Soler, Vinoth Sundar Rajan, Fredrik Westerlund, L. Marcus Wilhelmsson, Isabel Pastor, Felix Ritort, Steven B. Smith, Carlos Bustamante, Giovanni Volpe
arXiv: 2505.05290

There is a trend in research towards more automation using smart systems powered by artificial intelligence. While experiments are often challenging to automate, they can greatly benefit from automation by reducing labor and increasing reproducibility. For example, optical tweezers are widely employed in single-molecule biophysics, cell biomechanics, and soft matter physics, but they still require a human operator, resulting in low throughput and limited repeatability. Here, we present a smart optical tweezers platform, which we name SmartTrap, capable of performing complex experiments completely autonomously. SmartTrap integrates real-time 3D particle tracking using deep learning, custom electronics for precise feedback control, and a microfluidic setup for particle handling. We demonstrate the ability of SmartTrap to operate continuously, acquiring high-precision data over extended periods of time, through a series of experiments. By bridging the gap between manual experimentation and autonomous operation, SmartTrap establishes a robust and open-source framework for the next generation of optical tweezers research, capable of performing large-scale studies in single-molecule biophysics, cell mechanics, and colloidal science with reduced experimental overhead and operator bias.
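The closed-loop idea behind precise feedback control can be illustrated with a minimal toy model: a proportional controller that, at each time step, displaces the trap by a fraction of the measured position error. This is a generic sketch, not SmartTrap's actual control electronics or control law:

```python
import numpy as np

def feedback_step(position, setpoint, gain=0.5):
    """One proportional-feedback update: return a correction that moves
    the trap (or stage) by a fraction of the measured error."""
    return gain * (setpoint - position)

rng = np.random.default_rng(1)
position, setpoint = 0.0, 0.0
history = []
for _ in range(1000):
    position += rng.normal(scale=0.05)             # thermal (Brownian) kick
    position += feedback_step(position, setpoint)  # feedback correction
    history.append(position)

drift = abs(np.mean(history))  # residual drift around the setpoint
```

Even this crude loop keeps the particle's mean position pinned to the setpoint while thermal noise continually perturbs it; the real system closes an analogous loop on real-time 3D tracking data at hardware speed.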

Invited Talk by G. Volpe at OPIC/OMC 2025, Yokohama, Japan, 21 April 2025 (Online, Pre-recorded)

DeepTrack 2 Logo. (Image from DeepTrack 2 Project)
How can deep learning enhance microscopy?
Giovanni Volpe
Optics & Photonics International Congress 2025 (OPIC 2025), The 11th Optical Manipulation and Structured Materials Conference (OMC2025)
Date: 21 April 2025
Time: 13:45 JST
Place: Yokohama, Japan (Online, Pre-recorded)

BRAPH 2: a flexible, open-source, reproducible, community-oriented, easy-to-use framework for network analyses in neurosciences on bioRxiv

BRAPH 2 Genesis enables swift creation of custom, reproducible software distributions—tackling the growing complexity of neuroscience by streamlining analysis across diverse data types and workflows. (Image by B. Zufiria-Gerbolés and Y.-W. Chang.)
BRAPH 2: a flexible, open-source, reproducible, community-oriented, easy-to-use framework for network analyses in neurosciences
Yu-Wei Chang, Blanca Zufiria-Gerbolés, Pablo Emiliano Gómez-Ruiz, Anna Canal-Garcia, Hang Zhao, Mite Mijalkov, Joana Braga Pereira, Giovanni Volpe
bioRxiv: 10.1101/2025.04.11.648455

As network analyses in neuroscience continue to grow in both complexity and size, flexible methods are urgently needed to provide unbiased, reproducible insights into brain function. BRAPH 2 is a versatile, open-source framework that meets this challenge by offering streamlined workflows for advanced statistical models and deep learning in a community-oriented environment. Through its Genesis compiler, users can build specialized distributions with custom pipelines, ensuring flexibility and scalability across diverse research domains. These powerful capabilities will ensure reproducibility and accelerate discoveries in neuroscience.

Computational memory capacity predicts aging and cognitive decline published in Nature Communications

Memory capacity in aging. (A) Brain reservoir computing architecture with uniform random signals applied to all nodes. (Image from the article.)
Computational memory capacity predicts aging and cognitive decline
Mite Mijalkov, Ludvig Storm, Blanca Zufiria-Gerbolés, Dániel Veréb, Zhilei Xu, Anna Canal-Garcia, Jiawei Sun, Yu-Wei Chang, Hang Zhao, Emiliano Gómez-Ruiz, Massimiliano Passaretti, Sara Garcia-Ptacek, Miia Kivipelto, Per Svenningsson, Henrik Zetterberg, Heidi Jacobs, Kathy Lüdge, Daniel Brunner, Bernhard Mehlig, Giovanni Volpe, Joana B. Pereira
Nature Communications 16, 2748 (2025)
doi: 10.1038/s41467-025-57995-0

Memory is a crucial cognitive function that deteriorates with age. However, this ability is normally assessed using cognitive tests instead of the architecture of brain networks. Here, we use reservoir computing, a recurrent neural network computing paradigm, to assess the linear memory capacities of neural-network reservoirs extracted from brain anatomical connectivity data in a lifespan cohort of 636 individuals. The computational memory capacity emerges as a robust marker of aging, being associated with resting-state functional activity, white matter integrity, locus coeruleus signal intensity, and cognitive performance. We replicate our findings in an independent cohort of 154 young and 72 old individuals. By linking the computational memory capacity of the brain network with cognition, brain function and integrity, our findings open new pathways to employ reservoir computing to investigate aging and age-related disorders.
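The linear memory capacity is a standard reservoir-computing quantity: the sum, over delays k, of the squared correlation between the input delayed by k steps and its best linear reconstruction from the reservoir states. The sketch below computes it for a generic random echo-state reservoir (not the brain-connectivity reservoirs of the study):

```python
import numpy as np

def memory_capacity(W, w_in, u, max_delay=20, washout=100):
    """Linear memory capacity of a reservoir with recurrent weights W,
    input weights w_in, and scalar input sequence u: sum over delays k
    of the squared correlation between u delayed by k steps and its
    best linear readout from the reservoir states."""
    n = W.shape[0]
    x = np.zeros(n)
    states = []
    for t in range(len(u)):
        x = np.tanh(W @ x + w_in * u[t])   # reservoir update
        states.append(x.copy())
    X = np.array(states)[washout:]         # discard transient
    mc = 0.0
    for k in range(1, max_delay + 1):
        target = u[washout - k:len(u) - k]           # input delayed by k
        coef, *_ = np.linalg.lstsq(X, target, rcond=None)
        pred = X @ coef                              # linear readout
        mc += np.corrcoef(pred, target)[0, 1] ** 2
    return mc

rng = np.random.default_rng(0)
n = 50
W = rng.normal(size=(n, n)) / np.sqrt(n)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9
w_in = rng.uniform(-0.5, 0.5, n)
u = rng.uniform(-1, 1, 2000)
mc = memory_capacity(W, w_in, u)
```

In the study, the recurrent weights W are derived from individual anatomical connectomes rather than drawn at random, so the resulting capacity reflects the memory supported by each subject's brain network architecture.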