Poster by A. Lech at BNMI 2025, Gothenburg, 20 August 2025

Alex Lech at the BNMI poster session. (Photo by M. Granfors)
DeepTrack2: Microscopy Simulations for Deep Learning
Alex Lech, Mirja Granfors, Benjamin Midtvedt, Jesús Pineda, Harshith Bachimanchi, Carlo Manzo, Giovanni Volpe
BNMI 2025, 19-22 August 2025, Gothenburg, Sweden
Date: 20 August 2025
Time: 15:15-19:00
Place: Wallenberg Conference Centre

DeepTrack2 is a flexible and scalable Python library for simulating microscopy data, generating high-quality synthetic datasets for training deep learning models. It supports a wide range of imaging modalities, including brightfield, fluorescence, darkfield, and holography, allowing users to simulate realistic experimental conditions with ease. Its modular architecture lets users customize experimental setups, simulate a variety of objects, and incorporate optical aberrations, realistic experimental noise, and other user-defined effects, making it suitable for various research applications. DeepTrack2 is designed to be an accessible tool for researchers in fields that rely on image analysis and deep learning, as its simulations remove the need for labor-intensive manual annotation. This accelerates the development of AI-driven methods for experiments by providing the large volumes of data that deep learning models often require. DeepTrack2 has already been used for a number of applications in cell tracking, classification tasks, segmentation, and holographic reconstruction. Its flexible and scalable nature enables researchers to simulate a wide array of experimental conditions and scenarios with full control of features and parameters.

DeepTrack2 is available on GitHub, with extensive documentation, tutorials, and an active community for support and collaboration at https://github.com/DeepTrackAI/DeepTrack2.

References:

Digital video microscopy enhanced by deep learning.
Saga Helgadottir, Aykut Argun & Giovanni Volpe.
Optica, volume 6, pages 506-513 (2019).

Quantitative Digital Microscopy with Deep Learning.
Benjamin Midtvedt, Saga Helgadottir, Aykut Argun, Jesús Pineda, Daniel Midtvedt & Giovanni Volpe.
Applied Physics Reviews, volume 8, article number 011310 (2021).


Presentation by A. Lech at SPIE-ETAI, San Diego, 5 August 2025

DeepTrack2 Logo. (Image by J. Pineda)
Deeplay: Enhancing PyTorch with Customizable and Reusable Neural Networks
Alex Lech, Mirja Granfors, Benjamin Midtvedt, Jesús Pineda, Harshith Bachimanchi, Carlo Manzo, Giovanni Volpe
Date: 5 August 2025
Time: 12:00 – 12:15 PM
Place: Conv. Ctr. Room 4

Deeplay is a flexible Python library for deep learning that simplifies the definition and optimization of neural networks. It provides an intuitive framework that makes models easy to define and train. With its modular design, Deeplay lets users efficiently build and refine complex neural network architectures by seamlessly integrating reusable components based on PyTorch, and it adds a wide range of functionalities for altering and customizing existing models without introducing boilerplate code. Deeplay is accompanied by a dedicated GitHub page, featuring extensive documentation, examples, and an active community for support and collaboration: https://github.com/DeepTrackAI/deeplay.

Poster by A. Lech at the Gordon Research Conference at Stonehill College, Easton, MA, 9 June 2025

DeepTrack2 Logo. (Image by J. Pineda)
DeepTrack2: Microscopy Simulations for Deep Learning
Alex Lech, Mirja Granfors, Benjamin Midtvedt, Jesús Pineda, Harshith Bachimanchi, Carlo Manzo, Giovanni Volpe

Date: 9 June 2025
Time: 16:00-18:00
Place: Conference Label-Free Approaches to Observe Single Biomolecules for Biophysics and Biotechnology
8-13 June 2025
Stonehill College, Easton, Massachusetts

DeepTrack2 is a flexible and scalable Python library for simulating microscopy data, generating high-quality synthetic datasets for training deep learning models. It supports a wide range of imaging modalities, including brightfield, fluorescence, darkfield, and holography, allowing users to simulate realistic experimental conditions with ease. Its modular architecture lets users customize experimental setups, simulate a variety of objects, and incorporate optical aberrations, realistic experimental noise, and other user-defined effects, making it suitable for various research applications. DeepTrack2 is designed to be an accessible tool for researchers in fields that rely on image analysis and deep learning, as its simulations remove the need for labor-intensive manual annotation. This accelerates the development of AI-driven methods for experiments by providing the large-scale, high-quality data that deep learning models often require. DeepTrack2 has already been used for a number of applications in cell tracking, classification tasks, segmentation, and holographic reconstruction. Its flexible and scalable nature enables researchers to simulate a wide array of experimental conditions and scenarios with full control of the features.
DeepTrack2 is available on GitHub, with extensive documentation, tutorials, and an active community for support and collaboration at https://github.com/DeepTrackAI/DeepTrack2.

References:

Digital video microscopy enhanced by deep learning.
Saga Helgadottir, Aykut Argun & Giovanni Volpe.
Optica, volume 6, pages 506-513 (2019).

Quantitative Digital Microscopy with Deep Learning.
Benjamin Midtvedt, Saga Helgadottir, Aykut Argun, Jesús Pineda, Daniel Midtvedt & Giovanni Volpe.
Applied Physics Reviews, volume 8, article number 011310 (2021).


Presentation by A. Lech at EUROMECH Colloquium 656 in Gothenburg, 22 May 2025

Alex Lech presenting at the EUROMECH Colloquium. (Photo by M. Granfors.)
Deeplay: Enhancing PyTorch with Customizable and Reusable Neural Networks
Alex Lech

Date: 22 May 2025
Time: 15:00
Place: Veras Gräsmatta, Gothenburg
Part of the EUROMECH Colloquium 656 Data-Driven Mechanics and Physics of Materials

Deeplay is a Python-based deep learning library that extends PyTorch, addressing limitations in modularity and reusability commonly encountered in neural network development. Built with a core philosophy of modularity and adaptability, Deeplay introduces a system for defining, training, and dynamically modifying neural networks. Unlike traditional PyTorch modules, Deeplay allows users to adjust the properties of submodules post-creation, enabling seamless integration of changes without compromising the compatibility of other components. This flexibility promotes reusability, reduces redundant implementations, and simplifies experimentation with neural architectures. Deeplay’s architecture is organized around a hierarchy of abstractions, spanning from high-level models to individual layers. Each abstraction operates independently of the specifics of lower levels, allowing neural network components to be reconfigured or replaced without requiring foresight during initial design. Key features include a registry-based system for component customization, support for dynamic property modifications, and reusable modules that can be integrated across multiple projects. As a fully compatible superset of PyTorch, Deeplay enhances its functionality with advanced modularity and flexibility while maintaining seamless integration with existing PyTorch workflows. It extends the capabilities of PyTorch Lightning by addressing not only training loop optimization, but also the flexible and dynamic design of model architectures. By combining the familiarity and robustness of PyTorch with enhanced design flexibility, Deeplay empowers developers to efficiently prototype, refine, and deploy neural networks tailored to diverse machine learning challenges. Deeplay is accompanied by a dedicated GitHub page, featuring extensive documentation, examples, and an active community for support and collaboration.