Presentation by H. P. Thanabalan at SPIE-ETAI, San Diego, 5 August 2025

Inchworm-inspired soft robot. (Image by H. P. Thanabalan.)
Bio-inspired soft robot for multi-directionality
Hari Prakash Thanabalan, Lars Bengtsson, Ugo Lafont, Giovanni Volpe
SPIE Optics+Photonics, San Diego, CA, USA, 3-7 August 2025
Date: 5 August 2025
Time: 8:30 AM – 8:45 AM
Place: Conv. Ctr. Room 4

Soft robotics is at the forefront of robotics, leveraging compliant materials such as silicone elastomers to mimic biological organisms. With their virtually infinite degrees of freedom, soft robots surpass rigid robots in adaptability, making them ideal for exploration and manipulation tasks. Here we focus on an inchworm-inspired soft robot that achieves multidirectional locomotion through groove-guided movement. By manipulating the groove angles on a substrate, we demonstrate multidirectional locomotion utilising only a single actuator.

 

Poster by A. Callegari at SPIE-OTOM, San Diego, 4 August 2025

One of the Hexbugs used in the experiment. (Image by the Authors of the manuscript.)
Experimenting with macroscopic active matter
Angelo Barona Balda, Aykut Argun, Agnese Callegari, Giovanni Volpe
SPIE-OTOM, San Diego, CA, USA, 3 – 7 August 2025
Date: 4 August 2025
Time: 5:30 PM – 7:30 PM PDT
Place: Conv. Ctr. Exhibit Hall A

Presenter: Giovanni Volpe
Contribution submitted by Agnese Callegari

Active matter builds on concepts from nonequilibrium thermodynamics and finds applications across the most diverse disciplines. A key concept is the active Brownian particle, which, unlike its passive counterpart, extracts energy from its environment to generate complex motion and emergent behaviors. Despite its significance, active matter remains absent from standard curricula. This work presents macroscopic experiments using commercially available Hexbugs to demonstrate active matter phenomena. We show how Hexbugs can be modified to perform both regular and chiral active Brownian motion and how they interact with passive objects, inducing movement and rotation. By introducing obstacles, we sort Hexbugs based on motility and chirality. Finally, we demonstrate a Casimir-like attraction effect between planar objects in the presence of active particles.
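As an illustration of the underlying model (a minimal sketch, not code from the cited paper), the snippet below integrates the standard equations of a two-dimensional active Brownian particle with an Euler-Maruyama scheme; all parameter values are placeholders rather than calibrated Hexbug values, and a nonzero angular drift omega produces the chiral motion mentioned above.

```python
import numpy as np

# Minimal sketch of a 2D active Brownian particle (ABP).
# Parameter values are illustrative placeholders, not Hexbug calibrations.
v = 1.0        # self-propulsion speed
omega = 0.5    # angular drift (nonzero -> chiral active motion)
D_t = 0.01     # translational diffusion coefficient
D_r = 0.1      # rotational diffusion coefficient
dt = 1e-2      # time step
n_steps = 10_000

rng = np.random.default_rng(0)
positions = np.zeros((n_steps, 2))
phi = 0.0  # orientation angle

for i in range(1, n_steps):
    # Orientation: deterministic drift plus rotational noise.
    phi += omega * dt + np.sqrt(2 * D_r * dt) * rng.standard_normal()
    # Position: self-propulsion along the orientation plus translational noise.
    step = v * dt * np.array([np.cos(phi), np.sin(phi)])
    step += np.sqrt(2 * D_t * dt) * rng.standard_normal(2)
    positions[i] = positions[i - 1] + step
```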

Reference
Angelo Barona Balda, Aykut Argun, Agnese Callegari, Giovanni Volpe
Playing with Active Matter, American Journal of Physics 92, 847–858 (2024)

Poster by A. Callegari at SPIE-ETAI, San Diego, 4 August 2025

Focused rays scattered by an ellipsoidal particle (left). Optical torque along y calculated in the x-y plane using ray scattering with a grid of 1600 rays (top right) and using a trained neural network (bottom right). (Image by the Authors of the manuscript.)
Dense neural networks for geometrical optics
David Bronte Ciriza, Alessandro Magazzù, Agnese Callegari, Gunther Barbosa, Antonio A. R. Neves, Maria Antonia Iatì, Giovanni Volpe, and Onofrio M. Maragò
SPIE-ETAI, San Diego, CA, USA, 3 – 7 August 2025
Date: 4 August 2025
Time: 5:30 PM – 7:30 PM PDT
Place: Conv. Ctr. Exhibit Hall A

Presenter: Giovanni Volpe
Contribution submitted by Agnese Callegari

Light can trap and manipulate microscopic objects through optical forces and torques, as seen in optical tweezers. Predicting these forces is crucial for experiments and setup design. This study focuses on the geometrical optics regime, which applies to particles much larger than the light’s wavelength. In this model, a beam is represented by discrete rays that undergo multiple reflections and refractions, transferring momentum and angular momentum. However, the choice of ray discretization affects the balance between computational speed and accuracy. We demonstrate that neural networks overcome this limitation, enabling faster and even more precise simulations. Using an optically trapped spherical particle with an analytical solution as a benchmark, we validate our method and apply it to study complex systems that would otherwise be computationally hard.
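To make the regression setup concrete, here is an illustrative sketch (not the authors' code) of a small dense network that maps a particle pose to optical force and torque components, assuming PyTorch; the input size (5, e.g. position plus two orientation angles) and output size (6, three force and three torque components) are hypothetical choices, and in practice the training targets come from geometrical-optics ray tracing.

```python
import torch
from torch import nn

# Small dense regressor: particle pose -> force and torque components.
model = nn.Sequential(
    nn.Linear(5, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 6),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(pose, target):
    """One supervised regression step on ray-optics training data."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(pose), target)
    loss.backward()
    optimizer.step()
    return loss.item()

# Placeholder batch standing in for ray-optics simulation results.
pose = torch.randn(256, 5)
force_torque = torch.randn(256, 6)
train_step(pose, force_torque)
```

Once trained, evaluating the network replaces the repeated ray-tracing calculation, which is where the speed-up comes from.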

Reference
David Bronte Ciriza, Alessandro Magazzù, Agnese Callegari, Gunther Barbosa, Antonio A. R. Neves, Maria A. Iatì, Giovanni Volpe, Onofrio M. Maragò, Faster and more accurate geometrical-optics optical force calculation using neural networks, ACS Photonics 10, 234–241 (2023)

Presentation by A. Ciarlo at SPIE-OTOM, San Diego, 4 August 2025

Experimental trajectory (blue) of a particle trapped in air when the laser rotates at 1 Hz. The orange line represents the experimental laser trajectory. (Image by A. Ciarlo.)
Probing fluid dynamics inertial effects of particles using optical tweezers
Antonio Ciarlo, Giuseppe Pesce, Bernhard Mehlig, Antonio Sasso, and Giovanni Volpe
Date: 4 August 2025
Time: 11:45 AM – 12:00 PM
Place: Conv. Ctr. Room 3

Many natural phenomena involve dense particles suspended in a moving fluid, such as water droplets in clouds or dust grains in circumstellar disks. Studying these systems at the single particle level is challenging and requires precise control of flow and particle motion. Optical tweezers provide a powerful method for studying inertial effects in such environments. Here, we trap micrometer-sized particles in air and induce controlled dynamics by moving the trapping laser. We show that inertia becomes significant when the trap motion frequency is less than the harmonic trapping frequency, while at much higher motion frequencies, inertia has no effect. These results demonstrate the potential of trapping particles in air for studying inertial phenomena in fluids.
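As context for when the inertial term matters (a standard textbook description, not necessarily the authors' exact model), the dynamics of a particle of mass m in a moving harmonic trap can be written as an underdamped Langevin equation, with friction coefficient γ, trap stiffness k, laser-driven trap position x_trap(t), and thermal noise ξ(t):

```latex
m\,\ddot{x}(t) = -\gamma\,\dot{x}(t) - k\left[x(t) - x_{\mathrm{trap}}(t)\right] + \xi(t)
```

The inertial term m\ddot{x} is negligible in the overdamped regime typical of liquids, but becomes relevant for micrometer-sized particles in air, where damping is much weaker.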

Seminar by G. Volpe and C. Manzo at CIG, Makerere University, Kampala, Uganda, 3 July 2025 (Online)

GAUDI leverages a hierarchical graph-convolutional variational autoencoder architecture, where an encoder progressively compresses the graph into a low-dimensional latent space, and a decoder reconstructs the graph from the latent embedding. (Image by M. Granfors and J. Pineda.)
Cutting Training Data Needs through Inductive Bias & Unsupervised Learning
Giovanni Volpe and Carlo Manzo
Computational Intelligence Group (CIG), Weekly Reading Session
Date: 3 July 2025
Time: 17:00
Place: Makerere University, Kampala, Uganda (Online)

Graphs provide a powerful framework for modeling complex systems, but their structural variability makes analysis and classification challenging. To address this, we introduce GAUDI (Graph Autoencoder Uncovering Descriptive Information), a novel unsupervised geometric deep learning framework that captures both local details and global structure. GAUDI employs an innovative hourglass architecture with hierarchical pooling and upsampling layers, linked through skip connections to preserve essential connectivity information throughout the encoding–decoding process. By mapping different realizations of a system — generated from the same underlying parameters — into a continuous, structured latent space, GAUDI disentangles invariant process-level features from stochastic noise. We demonstrate its power across multiple applications, including modeling small-world networks, characterizing protein assemblies from super-resolution microscopy, analyzing collective motion in the Vicsek model, and capturing age-related changes in brain connectivity. This approach not only improves the analysis of complex graphs but also provides new insights into emergent phenomena across diverse scientific domains.
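For readers unfamiliar with hierarchical graph encoders, the following is an illustrative sketch (not the GAUDI implementation) of the general idea, written with PyTorch Geometric's GCNConv and TopKPooling as stand-ins for the hierarchical pooling layers; the class name, layer sizes, and latent dimension are all hypothetical. In GAUDI, a mirrored decoder with upsampling layers and skip connections reconstructs the graph from such an embedding.

```python
import torch
from torch import nn
from torch_geometric.nn import GCNConv, TopKPooling, global_mean_pool

class HierarchicalGraphEncoder(nn.Module):
    """Alternates graph convolutions and pooling to compress a graph."""

    def __init__(self, in_channels, hidden=64, latent=8):
        super().__init__()
        self.conv1 = GCNConv(in_channels, hidden)
        self.pool1 = TopKPooling(hidden, ratio=0.5)
        self.conv2 = GCNConv(hidden, hidden)
        self.pool2 = TopKPooling(hidden, ratio=0.5)
        self.to_latent = nn.Linear(hidden, latent)

    def forward(self, x, edge_index, batch):
        x = self.conv1(x, edge_index).relu()
        x, edge_index, _, batch, _, _ = self.pool1(x, edge_index, batch=batch)
        x = self.conv2(x, edge_index).relu()
        x, edge_index, _, batch, _, _ = self.pool2(x, edge_index, batch=batch)
        # Aggregate the coarsened node features into a graph-level embedding.
        return self.to_latent(global_mean_pool(x, batch))
```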

YouTube: Global graph features unveiled by unsupervised geometric deep learning

Invited Talk by G. Volpe at ELS XXI, Milazzo, Italy, 27 June 2025.

DeepTrack 2 Logo. (Image from DeepTrack 2 Project)
What can deep learning do for electromagnetic light scattering?
Giovanni Volpe
Electromagnetic and Light Scattering (ELS) XXI
Date: 27 June 2025
Time: 9:00
Place: Milazzo, Italy

Electromagnetic light scattering underpins a wide range of phenomena in both fundamental and applied research, from characterizing complex materials to tracking particles and cells in microfluidic devices. Video microscopy, in particular, has become a powerful method for studying scattering processes and extracting quantitative information. Yet, conventional algorithmic approaches for analyzing scattering data often prove cumbersome, computationally expensive, and highly specialized.
Recent advances in deep learning offer a compelling alternative. By leveraging data-driven models, we can automate the extraction of scattering characteristics with unprecedented speed and accuracy—uncovering insights that classical techniques might miss or require substantial computation to achieve. Despite these advantages, deep-learning-based tools remain underutilized in light-scattering research, largely because of the steep learning curve required to design and train such models.
To address these challenges, we have developed a user-friendly software platform (DeepTrack, now in version 2.2) that simplifies the entire workflow of deep-learning applications in digital microscopy. DeepTrack enables straightforward creation of custom datasets, network architectures, and training pipelines specifically tailored for quantitative scattering analyses. In this talk, I will discuss how emerging deep-learning methods can be combined with advanced imaging technologies to push the boundaries of electromagnetic light scattering research—reducing computational overhead, improving accuracy, and ultimately broadening access to powerful, data-driven solutions.

Hari Prakash presented his half-time seminar on 10th June 2025

Half-time seminar in Nexus, with Prof. Bernhard Mehlig (examiner) and soft matter group. (Photo by A. Callegari.)
Hari Prakash completed the first half of his doctoral studies and defended his half-time seminar on 10 June 2025.

The presentation, titled “Soft Robotic Platforms for Variable Conditions: From Adaptive Locomotion to Space Exploration”, was held in hybrid form, with part of the audience in the Nexus room and part attending through Zoom. The half-time consisted of a presentation about his past and planned projects, followed by a discussion and questions from his opponent, Professor Bernhard Mehlig.

The presentation started with a short background introduction to soft robotics and bio-inspired soft robotics, followed by an overview of soft actuators used in the field, with a focus on the soft actuator used throughout his projects. He then introduced his first project and paper (in preparation), “Inchworm-Inspired Soft Robot with Groove-Guided Locomotion,” and finally his second project, “Soft Inchworm-Inspired Robot Fault-Tolerant Artificial Muscles for Planetary Exploration – Simulation of fault-tolerant artificial muscles under proton, neutron, and alpha irradiation”, carried out in collaboration with the European Space Agency (ESA).

In the last section, he outlined the proposed continuation of his PhD: the experimental development of an inchworm-inspired soft robot for space exploration, particularly the Martian environment; testing the robot under real proton, neutron, and alpha irradiation; and the quantification and characterisation of the robot under space radiation.

Poster by A. Lech at the Gordon Research Conference at Stonehill College, Easton, MA, 9 June 2025

DeepTrack2 Logo. (Image by J. Pineda)
DeepTrack2: Microscopy Simulations for Deep Learning
Alex Lech, Mirja Granfors, Benjamin Midtvedt, Jesús Pineda, Harshith Bachimanchi, Carlo Manzo, Giovanni Volpe

Date: 9 June 2025
Time: 16:00-18:00
Place: Gordon Research Conference Label-Free Approaches to Observe Single Biomolecules for Biophysics and Biotechnology
8-13 June 2025
Stonehill College, Easton, Massachusetts

DeepTrack2 is a flexible and scalable Python library designed for simulating microscopy data to generate high-quality synthetic datasets for training deep learning models. It supports a wide range of imaging modalities, including brightfield, fluorescence, darkfield, and holography, allowing users to simulate realistic experimental conditions with ease. Its modular architecture enables users to customize experimental setups, simulate a variety of objects, and incorporate optical aberrations, realistic experimental noise, and other user-defined effects, making it suitable for various research applications. DeepTrack2 is designed to be an accessible tool for researchers in fields that rely on image analysis and deep learning, as its simulations remove the need for labor-intensive manual annotation. This accelerates the development of AI-driven methods for experiments by providing the large-scale, high-quality data that deep learning models often require. DeepTrack2 has already been used for a number of applications in cell tracking, classification, segmentation, and holographic reconstruction. Its flexible and scalable nature enables researchers to simulate a wide array of experimental conditions and scenarios with full control of the features.
DeepTrack2 is available on GitHub, with extensive documentation, tutorials, and an active community for support and collaboration at https://github.com/DeepTrackAI/DeepTrack2.
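As a flavor of the workflow, here is a minimal usage sketch following the pattern in the DeepTrack2 documentation, in which a scatterer is piped through a simulated optical device to produce a synthetic image; the parameter values are placeholders, and exact feature and argument names may differ between library versions.

```python
import numpy as np
import deeptrack as dt

# A point scatterer at a random position, imaged by a simulated
# fluorescence microscope.
particle = dt.PointParticle(
    position=lambda: np.random.uniform(0, 128, 2),  # random position (pixels)
    intensity=100,
)
microscope = dt.Fluorescence(
    NA=0.8,
    wavelength=680e-9,
    magnification=10,
    resolution=1e-6,
    output_region=(0, 0, 128, 128),
)
pipeline = microscope(particle)
image = pipeline.update().resolve()  # each call generates a new synthetic frame
```

Because the position is defined as a random function, repeatedly resolving the pipeline yields an endless stream of labeled training images without manual annotation.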

References:

Digital video microscopy enhanced by deep learning.
Saga Helgadottir, Aykut Argun & Giovanni Volpe.
Optica, volume 6, pages 506-513 (2019).

Quantitative Digital Microscopy with Deep Learning.
Benjamin Midtvedt, Saga Helgadottir, Aykut Argun, Jesús Pineda, Daniel Midtvedt & Giovanni Volpe.
Applied Physics Reviews, volume 8, article number 011310 (2021).

 

Presentation by M. Granfors at EUROMECH Colloquium 656 in Gothenburg, 22 May 2025

Mirja Granfors presenting at the EUROMECH Colloquium. (Photo by A. Lech.)
DeepTrack2: Physics-based Microscopy Simulations for Deep Learning
Mirja Granfors

Date: 22 May 2025
Time: 15:15
Place: Veras Gräsmatta, Gothenburg
Part of the EUROMECH Colloquium 656 Data-Driven Mechanics and Physics of Materials

DeepTrack2 is a flexible and scalable Python library designed to generate physics-based synthetic microscopy datasets for training deep learning models. It supports a wide range of imaging modalities, including brightfield, fluorescence, darkfield, and holography, enabling the creation of synthetic samples that accurately replicate real experimental conditions. Its modular architecture empowers users to customize optical systems, incorporate optical aberrations and noise, simulate diverse objects across various imaging scenarios, and apply image augmentations. DeepTrack2 is accompanied by a dedicated GitHub page, providing extensive documentation, examples, and an active community for support and collaboration: https://github.com/DeepTrackAI/DeepTrack2.