News

Invited Talk by G. Volpe at ELS XXI, Milazzo, Italy, 27 June 2025.

DeepTrack 2 Logo. (Image from DeepTrack 2 Project)
What can deep learning do for electromagnetic light scattering?
Giovanni Volpe
Electromagnetic and Light Scattering (ELS) XXI
Date: 27 June 2025
Time: 9:00
Place: Milazzo, Italy

Electromagnetic light scattering underpins a wide range of phenomena in both fundamental and applied research, from characterizing complex materials to tracking particles and cells in microfluidic devices. Video microscopy, in particular, has become a powerful method for studying scattering processes and extracting quantitative information. Yet, conventional algorithmic approaches for analyzing scattering data often prove cumbersome, computationally expensive, and highly specialized.
Recent advances in deep learning offer a compelling alternative. By leveraging data-driven models, we can automate the extraction of scattering characteristics with unprecedented speed and accuracy—uncovering insights that classical techniques might miss or require substantial computation to achieve. Despite these advantages, deep-learning-based tools remain underutilized in light-scattering research, largely because of the steep learning curve required to design and train such models.
To address these challenges, we have developed a user-friendly software platform (DeepTrack, now in version 2.2) that simplifies the entire workflow of deep-learning applications in digital microscopy. DeepTrack enables straightforward creation of custom datasets, network architectures, and training pipelines specifically tailored for quantitative scattering analyses. In this talk, I will discuss how emerging deep-learning methods can be combined with advanced imaging technologies to push the boundaries of electromagnetic light scattering research—reducing computational overhead, improving accuracy, and ultimately broadening access to powerful, data-driven solutions.

John Tember joins the Soft Matter Lab

(Photo by A. Ciarlo.)
John Tember joined the Soft Matter Lab on 15 June 2025.

John is a PhD student in Physics at the University of Gothenburg.

He holds a Master’s degree in Media Technology and Engineering from Linköping University.

During his time at the Soft Matter Lab, he will work on data-driven life science, with a focus on developing and analyzing 3D models derived from lightsheet microscopy.

Hari Prakash presented his half-time seminar on 10 June 2025

Half-time seminar in Nexus, with Prof. Bernhard Mehlig (examiner) and soft matter group. (Photo by A. Callegari.)
Hari Prakash completed the first half of his doctoral studies and defended his half-time seminar on 10 June 2025.

The presentation, titled “Soft Robotic Platforms for Variable Conditions: From Adaptive Locomotion to Space Exploration”, was held in hybrid form, with part of the audience in the Nexus room and part attending via Zoom. The half-time seminar consisted of a presentation about his past and planned projects, followed by a discussion and questions from his opponent, Professor Bernhard Mehlig.

The presentation began with a short introduction to soft robotics and bio-inspired soft robotics, followed by an overview of the soft actuators used in the field, with a focus on the actuator used throughout his projects. He then introduced his first project and paper (under preparation), “Inchworm-Inspired Soft Robot with Groove-Guided Locomotion,” and finally his second project, “Soft Inchworm-Inspired Robot Fault-Tolerant Artificial Muscles for Planetary Exploration – Simulation of fault-tolerant artificial muscles under proton, neutron, and alpha irradiation”, carried out in collaboration with the European Space Agency (ESA).

In the last section, he outlined the proposed continuation of his PhD: the experimental development of an inchworm-inspired soft robot for space exploration, particularly in the Martian environment; testing the robot under real proton, neutron, and alpha irradiation; and quantifying and characterizing its performance under space radiation.

Poster by A. Lech at the Gordon Research Conference at Stonehill College, Easton, MA, 9 June 2025

DeepTrack2 Logo. (Image by J. Pineda)
DeepTrack2: Microscopy Simulations for Deep Learning
Alex Lech, Mirja Granfors, Benjamin Midtvedt, Jesús Pineda, Harshith Bachimanchi, Carlo Manzo, Giovanni Volpe

Date: 9 June 2025
Time: 16:00-18:00
Place: Conference Label-Free Approaches to Observe Single Biomolecules for Biophysics and Biotechnology
8-13 June 2025
Stonehill College, Easton, Massachusetts

DeepTrack2 is a flexible and scalable Python library designed for simulating microscopy data to generate high-quality synthetic datasets for training deep learning models. It supports a wide range of imaging modalities, including brightfield, fluorescence, darkfield, and holography, allowing users to simulate realistic experimental conditions with ease. Its modular architecture enables users to customize experimental setups, simulate a variety of objects, and incorporate optical aberrations, realistic experimental noise, and other user-defined effects, making it suitable for various research applications. DeepTrack2 is designed to be an accessible tool for researchers in fields that utilize image analysis and deep learning, as it removes the need for labor-intensive manual annotation through simulations. This helps accelerate the development of AI-driven methods for experiments by providing large-scale, high-quality data that is often required by deep learning models. DeepTrack2 has already been used for a number of applications in cell tracking, classification tasks, segmentation, and holographic reconstruction. Its flexible and scalable nature enables researchers to simulate a wide array of experimental conditions and scenarios with full control of the features.
DeepTrack2 is available on GitHub, with extensive documentation, tutorials, and an active community for support and collaboration at https://github.com/DeepTrackAI/DeepTrack2.
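The key idea that makes simulation so useful for training, namely that the simulator supplies both the image and its ground-truth label, so no manual annotation is needed, can be illustrated with a minimal standard-library Python sketch. This toy Gaussian-spot renderer is an illustration of the principle, not the DeepTrack2 API:

```python
import math
import random

def simulate_particle_image(size=32, noise_sigma=0.05):
    """Render one synthetic microscopy frame: a diffraction-limited
    particle approximated by a Gaussian spot plus Gaussian sensor noise.
    Returns the image together with the ground-truth particle position,
    so the training label comes for free."""
    x0 = random.uniform(8, size - 8)   # ground-truth particle centre
    y0 = random.uniform(8, size - 8)
    sigma = 2.0                        # spot width (point-spread proxy)
    image = [
        [math.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
         + random.gauss(0, noise_sigma)
         for x in range(size)]
        for y in range(size)
    ]
    return image, (x0, y0)

# A training set is simply many (image, label) pairs drawn from the simulator.
dataset = [simulate_particle_image() for _ in range(100)]
```

DeepTrack2 generalizes this pattern with physically accurate optics models, multiple imaging modalities, and composable noise and aberration effects.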

References:

Digital video microscopy enhanced by deep learning.
Saga Helgadottir, Aykut Argun & Giovanni Volpe.
Optica, volume 6, pages 506-513 (2019).

Quantitative Digital Microscopy with Deep Learning.
Benjamin Midtvedt, Saga Helgadottir, Aykut Argun, Jesús Pineda, Daniel Midtvedt & Giovanni Volpe.
Applied Physics Reviews, volume 8, article number 011310 (2021).


Mirja Granfors won best early-career researcher presentation award at AnDi+ 2025, Gothenburg

Mirja Granfors receives the award. From left to right: Giorgio Volpe, Mirja Granfors, Wojciech Chachólski, Arrate Muñoz-Barrutia, Gorka Muñoz-Gil, Carlo Manzo. (Photo by A. Callegari.)

Mirja Granfors won the best early-career researcher presentation award at the AnDi+ 2025 workshop (AI for Bioimaging Beyond Trajectory Analysis), held in Gothenburg from 2 to 5 June 2025.

The award, consisting of a certificate and a €250 cash prize, is sponsored by Nanophotonics.

Mirja was awarded the prize for her presentation titled “DeepTrack2: Physics-based Microscopy Simulations for Deep Learning & Deeplay: Enhancing PyTorch with Customizable and Reusable Neural Networks”. In her presentation, she presented the Python libraries DeepTrack2 and Deeplay, both developed by the Soft Matter Lab to support AI-driven microscopy.

DeepTrack2 is a flexible and scalable Python library designed to generate physics-based synthetic microscopy datasets for training deep learning models. It supports a wide range of imaging modalities, including brightfield, fluorescence, darkfield, and holography, enabling the creation of synthetic samples that accurately replicate real experimental conditions. Its modular architecture empowers users to customize optical systems, incorporate optical aberrations and noise, simulate diverse objects across various imaging scenarios, and apply image augmentations.

Deeplay is a flexible Python library for deep learning that simplifies the definition and optimization of neural networks. It provides an intuitive framework that makes it easy to define and train models. With its modular design, Deeplay enables users to efficiently build and refine complex neural network architectures by seamlessly integrating reusable components.

An in vivo mimetic liver-lobule-chip (LLoC) for stem cell maturation, and zonation of hepatocyte-like cells on chip published in Lab on a Chip

The image shows a liver-lobule-chip (LLoC) with 21 artificial lobules mimicking liver microarchitecture. Its PDMS design supports diffusion-based perfusion, shear stress, and nutrient gradients and enables iPSC-derived hepatic maturation and spatially organized, zonated function in 3D. (Image by C. Beck Adiels)
An in vivo mimetic liver-lobule-chip (LLoC) for stem cell maturation, and zonation of hepatocyte-like cells on chip
Philip Dalsbecker, Siiri Suominen, Muhammad Asim Faridi, Reza Mahdavi, Julia Johansson, Charlotte Hamngren Blomqvist, Mattias Goksör, Katriina Aalto-Setälä, Leena E. Viiri and Caroline B. Adiels
Lab on a Chip 25, 4328–4344 (2025)
doi: 10.1039/D4LC00509K

In vitro cell culture models play a crucial role in preclinical drug discovery. To achieve optimal culturing environments and establish physiologically relevant organ-specific conditions, it is imperative to replicate in vivo scenarios when working with primary or induced pluripotent cell types. However, current approaches to recreating in vivo conditions and generating relevant 3D cell cultures still fall short. In this study, we validate a liver-lobule-chip (LLoC) containing 21 artificial liver lobules, each representing the smallest functional unit of the human liver. The LLoC facilitates diffusion-based perfusion via sinusoid-mimetic structures, providing physiologically relevant shear stress exposure and radial nutrient concentration gradients within each lobule. We demonstrate the feasibility of long-term cultures (up to 14 days) of viable and functional HepG2 cells in a 3D discoid tissue structure, serving as initial proof of concept. Thereafter, we successfully differentiate sensitive, human induced pluripotent stem cell (iPSC)-derived cells into hepatocyte-like cells over a period of 20 days on-chip, exhibiting advancements in maturity compared to traditional 2D cultures. Further, hepatocyte-like cells cultured in the LLoC exhibit zonated protein expression profiles, indicating the presence of metabolic gradients characteristic of liver lobules. Our results highlight the suitability of the LLoC for long-term discoid tissue cultures, specifically for iPSCs, and their differentiation in a perfused environment. We envision the LLoC as a starting point for more advanced in vitro models, allowing for the combination of multiple liver cell types to create a comprehensive liver model for disease-on-chip studies.
Ultimately, when combined with stem cell technology, the LLoC offers a promising and robust on-chip liver model that serves as a viable alternative to primary hepatocyte cultures—ideally suited for preclinical drug screening and personalized medicine applications.

Presentation by M. Granfors at EUROMECH Colloquium 656 in Gothenburg, 22 May 2025

Mirja Granfors presenting at the EUROMECH Colloquium. (Photo by A. Lech.)
DeepTrack2: Physics-based Microscopy Simulations for Deep Learning
Mirja Granfors

Date: 22 May 2025
Time: 15:15
Place: Veras Gräsmatta, Gothenburg
Part of the EUROMECH Colloquium 656 Data-Driven Mechanics and Physics of Materials

DeepTrack2 is a flexible and scalable Python library designed to generate physics-based synthetic microscopy datasets for training deep learning models. It supports a wide range of imaging modalities, including brightfield, fluorescence, darkfield, and holography, enabling the creation of synthetic samples that accurately replicate real experimental conditions. Its modular architecture empowers users to customize optical systems, incorporate optical aberrations and noise, simulate diverse objects across various imaging scenarios, and apply image augmentations. DeepTrack2 is accompanied by a dedicated GitHub page, providing extensive documentation, examples, and an active community for support and collaboration: https://github.com/DeepTrackAI/DeepTrack2.

Presentation by A. Lech at EUROMECH Colloquium 656 in Gothenburg, 22 May 2025

Alex Lech presenting at the EUROMECH Colloquium. (Photo by M. Granfors.)
Deeplay: Enhancing PyTorch with Customizable and Reusable Neural Networks
Alex Lech

Date: 22 May 2025
Time: 15:00
Place: Veras Gräsmatta, Gothenburg
Part of the EUROMECH Colloquium 656 Data-Driven Mechanics and Physics of Materials

Deeplay is a Python-based deep learning library that extends PyTorch, addressing limitations in modularity and reusability commonly encountered in neural network development. Built with a core philosophy of modularity and adaptability, Deeplay introduces a system for defining, training, and dynamically modifying neural networks. Unlike traditional PyTorch modules, Deeplay allows users to adjust the properties of submodules post-creation, enabling seamless integration of changes without compromising the compatibility of other components. This flexibility promotes reusability, reduces redundant implementations, and simplifies experimentation with neural architectures. Deeplay’s architecture is organized around a hierarchy of abstractions, spanning from high-level models to individual layers. Each abstraction operates independently of the specifics of lower levels, allowing neural network components to be reconfigured or replaced without requiring foresight during initial design. Key features include a registry-based system for component customization, support for dynamic property modifications, and reusable modules that can be integrated across multiple projects. As a fully compatible superset of PyTorch, Deeplay enhances its functionality with advanced modularity and flexibility while maintaining seamless integration with existing PyTorch workflows. It extends the capabilities of PyTorch Lightning by addressing not only training loop optimization, but also the flexible and dynamic design of model architectures. By combining the familiarity and robustness of PyTorch with enhanced design flexibility, Deeplay empowers developers to efficiently prototype, refine, and deploy neural networks tailored to diverse machine learning challenges. Deeplay is accompanied by a dedicated GitHub page, featuring extensive documentation, examples, and an active community for support and collaboration.
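The central pattern described above, deferring concrete construction so that submodule properties can still be adjusted after a model is defined, can be sketched in a few lines of plain Python. This is a toy illustration of the deferred-configuration idea, not the Deeplay API:

```python
# Toy illustration (not the Deeplay API): a model records its structure as
# specifications first and builds concrete layers only later, so submodule
# properties can still be changed after creation without rebuilding the rest.
class LayerSpec:
    def __init__(self, units, activation="relu"):
        self.units = units
        self.activation = activation

class ModelSpec:
    def __init__(self, *layers):
        self.layers = list(layers)

    def configure(self, index, **overrides):
        # Modify a submodule's properties post-creation.
        for key, value in overrides.items():
            setattr(self.layers[index], key, value)
        return self

    def build(self):
        # Only now is the specification turned into a concrete architecture.
        return [(spec.units, spec.activation) for spec in self.layers]

model = ModelSpec(LayerSpec(64), LayerSpec(64), LayerSpec(1))
model.configure(1, activation="tanh")  # change a hidden layer after the fact
print(model.build())  # [(64, 'relu'), (64, 'tanh'), (1, 'relu')]
```

In standard PyTorch, by contrast, layers are instantiated eagerly in `__init__`, so this kind of late reconfiguration typically requires rewriting the module; Deeplay builds its modularity around avoiding exactly that.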

Invited Seminar by G. Volpe at Cognitive and Behavior Changes in Parkinson’s Disease and Parkinsonism: Advances and Challenges, Santa Maria di Leuca, Italy, 21 May 2025

Braph 2 Logo. (Image from the Braph 2 Project)
The Role of Artificial Intelligence in Advanced Neuroimaging Analysis
Giovanni Volpe
Cognitive and Behavior Changes in Parkinson’s Disease and Parkinsonism: Advances and Challenges
Date: 21 May 2025
Time: 11:50
Place: Tricase, Santa Maria di Leuca, Italy