News

Matilda Hellström joins the Soft Matter Lab

(Photo by A. Ciarlo.)
Matilda Hellström joined the Soft Matter Lab on 1 September 2025.

Matilda is a master's student in Engineering Physics at Chalmers University of Technology.

During her time at the Soft Matter Lab, she will be working on developing self-supervised deep learning methods for analyzing microscopy data.

Poster by A. Lech at BNMI 2025, Gothenburg, 20 August 2025

Alex Lech at the BNMI poster session. (Photo by M. Granfors)
DeepTrack2: Microscopy Simulations for Deep Learning
Alex Lech, Mirja Granfors, Benjamin Midtvedt, Jesús Pineda, Harshith Bachimanchi, Carlo Manzo, Giovanni Volpe
BNMI 2025, 19-22 August 2025, Gothenburg, Sweden
Date: 20 August 2025
Time: 15:15-19:00
Place: Wallenberg Conference Centre

DeepTrack2 is a flexible and scalable Python library designed for simulating microscopy data to generate high-quality synthetic datasets for training deep learning models. It supports a wide range of imaging modalities, including brightfield, fluorescence, darkfield, and holography, allowing users to simulate realistic experimental conditions with ease. Its modular architecture enables users to customize experimental setups, simulate a variety of objects, and incorporate optical aberrations, realistic experimental noise, and other user-defined effects, making it suitable for various research applications. DeepTrack2 is designed to be an accessible tool for researchers in fields that rely on image analysis and deep learning, as its simulations remove the need for labor-intensive manual annotation. This accelerates the development of AI-driven methods for experiments by providing the large volumes of data that deep learning models often require. DeepTrack2 has already been used in a number of applications, including cell tracking, classification tasks, segmentation, and holographic reconstruction. Its flexible and scalable nature enables researchers to simulate a wide array of experimental conditions and scenarios with full control over features and parameters.

DeepTrack2 is available on GitHub, with extensive documentation, tutorials, and an active community for support and collaboration at https://github.com/DeepTrackAI/DeepTrack2.
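As a minimal illustration of the kind of simulation pipeline described above, the sketch below renders a fluorescence-like image by placing point emitters, blurring them with a Gaussian approximation of the point spread function, and adding Poisson shot noise. Note that this is a self-contained NumPy sketch of the general idea, not the DeepTrack2 API; the real interface is documented on the GitHub page.

```python
# Illustrative synthetic-microscopy sketch (NumPy only; NOT the DeepTrack2 API).
import numpy as np

rng = np.random.default_rng(0)

def synthetic_fluorescence(size=64, n_particles=5, sigma=2.0,
                           intensity=200.0, background=10.0):
    """Render point emitters blurred by a Gaussian PSF, with Poisson shot noise."""
    yy, xx = np.mgrid[0:size, 0:size]
    image = np.full((size, size), background, dtype=float)
    # Random emitter positions double as ground-truth labels, for free.
    positions = rng.uniform(0, size, size=(n_particles, 2))
    for y0, x0 in positions:
        image += intensity * np.exp(
            -((yy - y0) ** 2 + (xx - x0) ** 2) / (2 * sigma ** 2)
        )
    noisy = rng.poisson(image).astype(float)  # shot noise
    return noisy, positions

image, labels = synthetic_fluorescence()
print(image.shape, labels.shape)  # (64, 64) (5, 2)
```

Because the image and its labels come from the same simulation, arbitrarily large annotated training sets can be generated without manual labeling, which is the core idea the library builds on.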

References:

Digital video microscopy enhanced by deep learning.
Saga Helgadottir, Aykut Argun & Giovanni Volpe.
Optica, volume 6, pages 506-513 (2019).

Quantitative Digital Microscopy with Deep Learning.
Benjamin Midtvedt, Saga Helgadottir, Aykut Argun, Jesús Pineda, Daniel Midtvedt & Giovanni Volpe.
Applied Physics Reviews, volume 8, article number 011310 (2021).


Microscopic Geared Metamachines published in Nature Communications

Top: a single gear. Bottom: the second gear from the right has an optical metamaterial that reacts to laser light and makes the gear move. All gears are made of silica directly on a chip. Each gear is about 0.016 mm in diameter. (Image by G. Wang)

Microscopic Geared Metamachines
Gan Wang, Marcel Rey, Antonio Ciarlo, Mohammad Mahdi Shanei, Kunli Xiong, Giuseppe Pesce, Mikael Käll and Giovanni Volpe
Nature Communications 16, 7767 (2025)
doi: 10.1038/s41467-025-62869-6
arXiv: 2409.17284

The miniaturization of mechanical machines is critical for advancing nanotechnology and reducing device footprints. Traditional efforts to downsize gears and micromotors have faced limitations at around 0.1 mm for over thirty years due to the complexities of constructing drives and coupling systems at such scales. Here, we present an alternative approach utilizing optical metasurfaces to locally drive microscopic machines, which can then be fabricated using standard lithography techniques and seamlessly integrated on the chip, achieving sizes down to tens of micrometers with movements precise to the sub-micrometer scale. As a proof of principle, we demonstrate the construction of microscopic gear trains powered by a single driving gear with a metasurface activated by a plane light wave. Additionally, we develop a versatile pinion and rack micromachine capable of transducing rotational motion, performing periodic motion, and controlling microscopic mirrors for light deflection. Our on-chip fabrication process allows for straightforward parallelization and integration. Using light as a widely available and easily controllable energy source, these miniaturized metamachines offer precise control and movement, unlocking new possibilities for micro- and nanoscale systems.

After publication, the article was covered by many media outlets, including the University of Gothenburg, New Scientist, Optics.org, Phys.org, ScienceDaily, and Discover Magazine.

Featured in:
GU: Light powered motor fits inside a strand of hair.
New Scientist: Microscopic gears powered by light could be used to make tiny machines, and a video on YouTube.
Optics.org: University of Gothenburg makes micron-scale light-powered gears
Phys.org: New light-powered gear fits inside a strand of hair
ScienceDaily: Scientists build micromotors smaller than a human hair
Discover Magazine: The Smallest Motors in History Can Fit Inside a Strand of Hair
@DrBenMiles: Scientists Create Micro Machines Powered by Light

Presentation by M. Granfors at BNMI 2025, Gothenburg, 20 August 2025

DeepTrack2 Logo. (Image by J. Pineda)
DeepTrack2: physics-based microscopy simulations for deep learning
Mirja Granfors, Alex Lech, Benjamin Midtvedt, Jesús Pineda, Harshith Bachimanchi, Carlo Manzo, and Giovanni Volpe
BNMI 2025, 19-22 August 2025, Gothenburg, Sweden
Date: 20 August 2025
Time: 15:00 – 15:15
Place: Wallenberg Conference Centre

DeepTrack2 is a flexible and scalable Python library designed to generate physics-based synthetic microscopy datasets for training deep learning models. It supports a wide range of imaging modalities, including brightfield, fluorescence, darkfield, and holography, enabling the creation of synthetic samples that accurately replicate real experimental conditions. Its modular architecture empowers users to customize optical systems, incorporate optical aberrations and noise, simulate diverse objects across various imaging scenarios, and apply image augmentations. DeepTrack2 is accompanied by a dedicated GitHub page, providing extensive documentation, examples, and an active community for support and collaboration: https://github.com/DeepTrackAI/DeepTrack2.

Roadmap for animate matter published in Journal of Physics: Condensed Matter

The three properties of animacy. The three polar plots sketch our jointly perceived level of development for each principle of animacy (i.e. activity, adaptiveness and autonomy) for each system discussed in this roadmap. The polar coordinate represents the various systems, while the radial coordinate represents the level of development (from low to high) that each system shows in the principle of each polar plot. Ideally, within a generation, all systems will fill these polar plots to show high levels in each of the three attributes of animacy. For now, only biological materials (not represented here) can be considered fully animated. (Image from the manuscript, adapted.)
Roadmap for animate matter
Giorgio Volpe, Nuno A M Araújo, Maria Guix, Mark Miodownik, Nicolas Martin, Laura Alvarez, Juliane Simmchen, Roberto Di Leonardo, Nicola Pellicciotta, Quentin Martinet, Jérémie Palacci, Wai Kit Ng, Dhruv Saxena, Riccardo Sapienza, Sara Nadine, João F Mano, Reza Mahdavi, Caroline Beck Adiels, Joe Forth, Christian Santangelo, Stefano Palagi, Ji Min Seok, Victoria A Webster-Wood, Shuhong Wang, Lining Yao, Amirreza Aghakhani, Thomas Barois, Hamid Kellay, Corentin Coulais, Martin van Hecke, Christopher J Pierce, Tianyu Wang, Baxi Chong, Daniel I Goldman, Andreagiovanni Reina, Vito Trianni, Giovanni Volpe, Richard Beckett, Sean P Nair, Rachel Armstrong
Journal of Physics: Condensed Matter 37, 333501 (2025)
arXiv: 2407.10623
doi: 10.1088/1361-648X/adebd3

Humanity has long sought inspiration from nature to innovate materials and devices. As science advances, nature-inspired materials are becoming part of our lives. Animate materials, characterized by their activity, adaptability, and autonomy, emulate properties of living systems. While only biological materials fully embody these principles, artificial versions are advancing rapidly, promising transformative impacts in the circular economy, health and climate resilience within a generation. This roadmap presents authoritative perspectives on animate materials across different disciplines and scales, highlighting their interdisciplinary nature and potential applications in diverse fields including nanotechnology, robotics and the built environment. It underscores the need for concerted efforts to address shared challenges such as complexity management, scalability, evolvability, interdisciplinary collaboration, and ethical and environmental considerations. The framework defined by classifying materials based on their level of animacy can guide this emerging field to encourage cooperation and responsible development. By unravelling the mysteries of living matter and leveraging its principles, we can design materials and systems that will transform our world in a more sustainable manner.

Jun Yi Chen joins the Soft Matter Lab

(Photo by A. Ciarlo.)
Jun Yi Chen, a master's student in Chemistry at the University of Münster, started his Erasmus internship at the Physics Department of the University of Gothenburg on 11 August 2025.

Jun Yi holds a bachelor’s degree in Chemistry from the University of Münster.

During his internship at the Soft Matter Lab, he will investigate the interactions of polymer-coated silica microparticles under various stimuli using optical tweezers.

Mirja Granfors received the Best Early-Career Researcher Presentation Award at ETAI 2025, San Diego

(Photo by M. Granfors.)

Mirja Granfors received the Best Early Career Researcher Presentation Award at Emerging Topics in Artificial Intelligence (ETAI) 2025 held in San Diego, from 3 to 7 August 2025.

The award, which includes a certificate, a cash prize of $300, and a T-shirt, is presented by the organizers of the conference in collaboration with SPIE Optics + Photonics.

Mirja was awarded the prize for her presentation titled “DeepTrack2: physics-based microscopy simulations for deep learning”. Below is the full abstract of her presentation:

DeepTrack2 is a flexible and scalable Python library designed to generate physics-based synthetic microscopy datasets for training deep learning models. It supports a wide range of imaging modalities, including brightfield, fluorescence, darkfield, and holography, enabling the creation of synthetic samples that accurately replicate real experimental conditions. Its modular architecture empowers users to customize optical systems, incorporate optical aberrations and noise, simulate diverse objects across various imaging scenarios, and apply image augmentations. DeepTrack2 is accompanied by a dedicated GitHub page, providing extensive documentation, examples, and an active community for support and collaboration: https://github.com/DeepTrackAI/DeepTrack2.

Hari Prakash received the Best Early-Career Researcher Presentation Award at ETAI 2025, San Diego

Hari Prakash received the Best Early Career Researcher Presentation Award at Emerging Topics in Artificial Intelligence (ETAI) 2025 held in San Diego, from 3 to 7 August 2025.

The award, which includes a certificate, a cash prize of $300, and a T-shirt, is presented by the organisers of the conference in collaboration with SPIE Optics + Photonics.

Hari was awarded the prize for his presentation titled “Inchworm-Inspired Soft Robot with Groove-Guided Locomotion”. Below is the full abstract of his presentation:

Soft robots require directional control to navigate complex terrains. However, achieving such control often requires multiple actuators, which increases mechanical complexity, complicates control systems, and raises energy consumption. Here, we introduce an inchworm-inspired soft robot whose locomotion direction is controlled passively by patterned substrates. The robot employs a single rolled dielectric elastomer actuator, while groove patterns on a 3D-printed substrate guide its alignment and trajectory. Through systematic experiments, we demonstrate that varying groove angles enables precise control of locomotion direction without the need for complex actuation strategies. This groove-guided approach reduces energy consumption, simplifies robot design, and expands the applicability of bio-inspired soft robots in fields such as search and rescue, pipe inspection, and planetary exploration.

Soft Matter Lab members present at SPIE Optics+Photonics conference in San Diego, 3-7 August 2025

The Soft Matter Lab participates in the SPIE Optics+Photonics conference in San Diego, CA, USA, 3-7 August 2025, with the presentations listed below.

Giovanni Volpe, who serves as Symposium Chair for the SPIE Optics+Photonics Congress in 2025, is a coauthor of the following invited presentations:

Giovanni Volpe will also be the reference presenter of the following Poster contributions:

Presentation by M. Granfors at SPIE-ETAI, San Diego, 7 August 2025

GAUDI leverages a hierarchical graph-convolutional variational autoencoder architecture, where an encoder progressively compresses the graph into a low-dimensional latent space, and a decoder reconstructs the graph from the latent embedding. (Image by M. Granfors and J. Pineda.)
Global graph features unveiled by unsupervised geometric deep learning
Mirja Granfors, Jesús Pineda, Blanca Zufiria Gerbolés, Joana Pereira, Carlo Manzo, and Giovanni Volpe
Date: 7 August 2025
Time: 2:45 PM – 3:00 PM
Place: Conv. Ctr. Room 4

Graphs are used to model complex relationships, such as interactions between particles or connections between brain regions. The structural complexity and variability of graphs pose challenges to their efficient analysis and classification. Here, we propose GAUDI (Graph Autoencoder Uncovering Descriptive Information), a graph autoencoder that addresses these challenges. GAUDI is trained in an unsupervised manner to capture the most critical parameters of graphs in the latent space, thereby enabling the extraction of essential parameters characterizing the graphs. We demonstrate the performance of GAUDI across diverse graph data originating from complex systems, including the estimation of the parameters of Watts-Strogatz graphs, the classification of protein assembly structures from single-molecule localization microscopy data, the analysis of collective behaviors, and correlations between brain connections and age. This approach offers a robust framework for efficiently analyzing and interpreting complex graph data, facilitating the extraction of meaningful patterns and insights across a wide range of applications.