Roadmap on Deep Learning for Microscopy published in Journal of Physics: Photonics

Spatio-temporal spectrum diagram of microscopy techniques and their applications. (Image by the Authors of the manuscript.)
Roadmap on Deep Learning for Microscopy
Giovanni Volpe, Carolina Wählby, Lei Tian, Michael Hecht, Artur Yakimovich, Kristina Monakhova, Laura Waller, Ivo F. Sbalzarini, Christopher A. Metzler, Mingyang Xie, Kevin Zhang, Isaac C.D. Lenton, Halina Rubinsztein-Dunlop, Daniel Brunner, Bijie Bai, Aydogan Ozcan, Daniel Midtvedt, Hao Wang, Nataša Sladoje, Joakim Lindblad, Jason T. Smith, Marien Ochoa, Margarida Barroso, Xavier Intes, Tong Qiu, Li-Yu Yu, Sixian You, Yongtao Liu, Maxim A. Ziatdinov, Sergei V. Kalinin, Arlo Sheridan, Uri Manor, Elias Nehme, Ofri Goldenberg, Yoav Shechtman, Henrik K. Moberg, Christoph Langhammer, Barbora Špačková, Saga Helgadottir, Benjamin Midtvedt, Aykut Argun, Tobias Thalheim, Frank Cichos, Stefano Bo, Lars Hubatsch, Jesus Pineda, Carlo Manzo, Harshith Bachimanchi, Erik Selander, Antoni Homs-Corbera, Martin Fränzl, Kevin de Haan, Yair Rivenson, Zofia Korczak, Caroline Beck Adiels, Mite Mijalkov, Dániel Veréb, Yu-Wei Chang, Joana B. Pereira, Damian Matuszewski, Gustaf Kylberg, Ida-Maria Sintorn, Juan C. Caicedo, Beth A Cimini, Muyinatu A. Lediju Bell, Bruno M. Saraiva, Guillaume Jacquemet, Ricardo Henriques, Wei Ouyang, Trang Le, Estibaliz Gómez-de-Mariscal, Daniel Sage, Arrate Muñoz-Barrutia, Ebba Josefson Lindqvist, Johanna Bergman
Journal of Physics: Photonics 8, 012501 (2026)
arXiv: 2303.03793
doi: 10.1088/2515-7647/ae0fd1

Through digital imaging, microscopy has evolved from primarily being a means for visual observation of life at the micro- and nano-scale, to a quantitative tool with ever-increasing resolution and throughput. Artificial intelligence, deep neural networks, and machine learning (ML) are all niche terms describing computational methods that have gained a pivotal role in microscopy-based research over the past decade. This Roadmap encompasses key aspects of how ML is applied to microscopy image data, with the aim of gaining scientific knowledge by improved image quality, automated detection, segmentation, classification and tracking of objects, and efficient merging of information from multiple imaging modalities. We aim to give the reader an overview of the key developments and an understanding of possibilities and limitations of ML for microscopy. It will be of interest to a wide cross-disciplinary audience in the physical sciences and life sciences.

Yu-Wei Chang defended his PhD thesis on January 23rd, 2026. Congrats!

Cover of the PhD thesis. (Image by Hula King, https://www.behance.net/hulaking)
Yu-Wei Chang defended his PhD thesis on January 23rd, 2026. Congrats!
The defense took place in the SB-H7 lecture hall, SB-Building, Institutionen för fysik, Johanneberg Campus, Göteborg, at 13:00.

Title: A Unified Software-Generating Framework for Biological Data Analysis

Abstract: Biological data analysis relies heavily on software, but as projects grow it becomes hard to keep code, interfaces, and tests aligned, and to reuse methods without rewriting them. This thesis presents Genesis, which generates runnable modules, GUIs, and unit tests from a single human-readable .gen.m description of each analysis component. By maintaining a central library of these descriptions, analyses can be recombined for new questions while staying consistent. Four studies across neuroimaging, light-sheet microscopy, and plant Raman spectroscopy show the framework is reusable and extensible across domains.

Thesis: http://hdl.handle.net/2077/90289

Supervisors: Giovanni Volpe (Main), Caroline Beck Adiels
Examiner: Raimund Feifel
Opponent: Arvind Kumar
Committee: Wojciech Chachólski, Rita Almeida, Paolo Vinai
Alternate board member: Mohsen Mirkhalaf

Yu-Wei Chang nailed his PhD thesis on January 7th, 2026. Congrats!

Thesis nailing by Yu-Wei Chang. (Photo by C. Khanolkar.)
Yu-Wei Chang nailed his PhD thesis, A Unified Software-Generating Framework for Biological Data Analysis, on January 7th, 2026. Congrats!

The nailing took place in Universitetsbyggnaden i Vasaparken, Universitetsplatsen 1, Göteborg, at 13:30.

In Swedish academia, “nailing” (spikning) is the formal public announcement and publication of a doctoral thesis. It happens weeks before the defence so that the public has time to read the thesis in advance and prepare questions for the defence. In addition to the physical nailing, the thesis is also published electronically (e-spikning) via GUPEA.

Yu-Wei will defend his thesis on 23 January at 13:00 in SB-H7 lecture hall, SB-Building, Institutionen för fysik, Johanneberg Campus, Göteborg.

Thesis (GUPEA handle): http://hdl.handle.net/2077/90289

Video‐rate tunable colour electronic paper with human resolution published in Nature

High-resolution display of “The Kiss” on Retina E-paper vs. iPhone 15: Photographs comparing the display of “The Kiss” on an iPhone 15 and on Retina E-paper. The surface area of the Retina E-paper is ~1/4000 of that of the iPhone 15. (Image by the Authors of the manuscript.)
Video‐rate tunable colour electronic paper with human resolution
Ade Satria Saloka Santosa, Yu-Wei Chang, Andreas B. Dahlin, Lars Osterlund, Giovanni Volpe, Kunli Xiong
Nature 646, 1089-1095 (2025)
arXiv: 2502.03580
doi: 10.1038/s41586-025-09642-3

As demand for immersive experiences grows, displays are moving closer to the eye with smaller sizes and higher resolutions. However, shrinking pixel emitters reduce intensity, making them harder to perceive. Electronic Papers utilize ambient light for visibility, maintaining optical contrast regardless of pixel size, but cannot achieve high resolution. We show electrically tunable meta-pixels down to ~560 nm in size (>45,000 PPI) consisting of WO3 nanodiscs, allowing one-to-one pixel-photodetector mapping on the retina when the display size matches the pupil diameter, which we call Retina Electronic Paper. Our technology also supports video display (25 Hz), high reflectance (~80%), and optical contrast (~50%), which will help create the ultimate virtual reality display.
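As a quick sanity check on the resolution figure, the pixel density in PPI follows directly from the pixel pitch; this short sketch (assuming square pixels at the stated ~560 nm size) recovers the quoted >45,000 PPI:

```python
# Pixel density (PPI) from pixel pitch: PPI = 1 inch / pitch.
INCH_M = 25.4e-3          # one inch in metres
pitch_m = 560e-9          # ~560 nm meta-pixel size reported in the paper

ppi = INCH_M / pitch_m
print(f"{ppi:.0f} PPI")   # ~45357, consistent with the quoted >45,000 PPI
```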

Workshop by Y.-W. Chang at NEMES 2025, Gothenburg, 25 September 2025

Massimiliano Passaretti (left) and Yu-Wei Chang (right) at NEMES 2025. (Photo courtesy of Clarion Hotel Draken.)
Graph theory and deep learning pipelines
Yu-Wei Chang, Massimiliano Passaretti
NEMES 2025, 24-26 September, 2025
Date: 25 September 2025
Time: 12:45 – 14:00
Place: Clarion Hotel Draken

This workshop begins with a practical introduction to graph theory, then guides participants through BRAPH 2 to build connectomes, compute graph measures, and run group comparisons, followed by a hands-on deep-learning pipeline. It demonstrates the unified GUI/command-line workflow that is a distinctive feature of the BRAPH 2 architecture, helping participants move smoothly from the GUI to scripts. Finally, it guides participants in reproducing the multiplex and deep-learning results from the BRAPH 2 bioRxiv preprint on their own computers.
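For readers unfamiliar with the graph measures covered in the workshop, here is a minimal Python sketch (not BRAPH 2 code; the 4-node adjacency matrix is invented for illustration) of two common measures, node degree and the local clustering coefficient, computed from a binary undirected adjacency matrix:

```python
# Toy binary undirected adjacency matrix (hypothetical 4-node "connectome").
A = [
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
]

def degree(A, i):
    """Number of edges attached to node i."""
    return sum(A[i])

def clustering(A, i):
    """Fraction of node i's neighbour pairs that are themselves connected."""
    nbrs = [j for j, a in enumerate(A[i]) if a]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(A[u][v] for u in nbrs for v in nbrs if u < v)
    return 2 * links / (k * (k - 1))

print([degree(A, i) for i in range(4)])  # [2, 3, 2, 1]
print(clustering(A, 1))                  # 1 of 3 neighbour pairs connected -> 1/3
```

Group comparisons in connectomics then typically contrast such per-node measures between cohorts with permutation statistics, which is the workflow the workshop walks through in BRAPH 2.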
 

Presentation by Y.-W. Chang at NEMES 2025, Gothenburg, 25 September 2025

From images to graphs, this plenary shows how parcellations and tractography become connectomes and how network analysis reveals brain-network signatures. (Image by Y.-W. Chang.)
Network analysis of neuroimaging data, and deep learning pipelines
Yu-Wei Chang
NEMES 2025, 24-26 September, 2025
Date: 25 September 2025
Time: 09:00 – 09:45
Place: Clarion Hotel Draken

This plenary presents a practical framework for analysing neuroimaging data with network science and deep learning. It moves from modality-specific preprocessing to graph construction (single-layer and multiplex), then covers core graph measures, group inference, and brain-surface visualization, highlighting recent work from Associate Professor Joana B. Pereira’s group (Department of Clinical Neuroscience, Karolinska Institutet). It also introduces deep-learning pipelines for neuroimaging data: reservoir-computing memory capacity analysis, GapNet for handling missing data, and a robust feature-attribution method combined with SNP (single nucleotide polymorphism) information. The plenary concludes with the BRAPH 2 framework, which supports these pipelines and extends to other ongoing projects (e.g., light-sheet microscopy, Raman spectroscopy).
 

Soft Matter Lab members present at SPIE Optics+Photonics conference in San Diego, 3-7 August 2025

The Soft Matter Lab participates in the SPIE Optics+Photonics conference in San Diego, CA, USA, 3-7 August 2025, with the presentations listed below.

Giovanni Volpe, who serves as Symposium Chair for the SPIE Optics+Photonics Congress in 2025, is a coauthor of the following invited presentations:

Giovanni Volpe will also be the reference presenter of the following Poster contributions:

Presentation by Y.-W. Chang at SPIE-ETAI, San Diego, 6 August, 2025

BRAPH 2 Genesis enables swift creation of custom, reproducible software distributions—tackling the growing complexity of neuroscience by streamlining analysis across diverse data types and workflows. (Image by B. Zufiria-Gerbolés and Y.-W. Chang.)
BRAPH 2: a flexible, open-source, reproducible, community-oriented, easy-to-use framework for replicable network analysis in neurosciences
Yu-Wei Chang, Blanca Zufiria Gerbolés, Joana B Pereira, Giovanni Volpe
Date: 6 August 2025
Time: 11:00 AM PDT

As network analyses in neuroscience continue to grow in both complexity and size, flexible methods are urgently needed to provide unbiased, reproducible insights into brain function. BRAPH 2 is a versatile, open-source framework that meets this challenge by offering streamlined workflows for advanced statistical models and deep learning in a community-oriented environment. Through its Genesis compiler, users can build specialized distributions with custom pipelines, ensuring flexibility and scalability across diverse research domains. These powerful capabilities will ensure reproducibility and accelerate discoveries in neuroscience.

 

Deep-Learning Investigation of Vibrational Raman Spectra for Plant-Stress Analysis on arXiv

In this work, we present an unsupervised deep learning framework using Variational Autoencoders (VAEs) to decode stress-specific biomolecular fingerprints directly from Raman spectral data across multiple plant species and genotypes. (Image by the Authors of the manuscript. A part of the image was designed using Biorender.com.)
From Spectra to Stress: Unsupervised Deep Learning for Plant Health Monitoring
Anoop C. Patil, Benny Jian Rong Sng, Yu-Wei Chang, Joana B. Pereira, Chua Nam-Hai, Rajani Sarojam, Gajendra Pratap Singh, In-Cheol Jang, and Giovanni Volpe
arXiv: 2507.15772

Detecting stress in plants is crucial for both open-farm and controlled-environment agriculture. Biomolecules within plants serve as key stress indicators, offering vital markers for continuous health monitoring and early disease detection. Raman spectroscopy provides a powerful, non-invasive means to quantify these biomolecules through their molecular vibrational signatures. However, traditional Raman analysis relies on customized data-processing workflows that require fluorescence background removal and prior identification of Raman peaks of interest, introducing potential biases and inconsistencies. Here, we introduce DIVA (Deep-learning-based Investigation of Vibrational Raman spectra for plant-stress Analysis), a fully automated workflow based on a variational autoencoder. Unlike conventional approaches, DIVA processes native Raman spectra, including fluorescence backgrounds, without manual preprocessing, identifying and quantifying significant spectral features in an unbiased manner. We applied DIVA to detect a range of plant stresses, including abiotic (shading, high light intensity, high temperature) and biotic stressors (bacterial infections). By integrating deep learning with vibrational spectroscopy, DIVA paves the way for AI-driven plant health assessment, fostering more resilient and sustainable agricultural practices.
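The core idea in the abstract, learning what "normal" spectra look like and flagging deviations without hand-picking peaks, can be illustrated with a much simpler stand-in: PCA reconstruction error instead of a variational autoencoder. This is not the DIVA code; the synthetic "spectra" and the shifted stress peak are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins = 100

def spectrum(peak, amp, noise=0.02):
    """Synthetic 1D 'Raman spectrum': one Gaussian peak plus noise."""
    x = np.arange(n_bins)
    return amp * np.exp(-0.5 * ((x - peak) / 4.0) ** 2) + noise * rng.standard_normal(n_bins)

# "Healthy" spectra: peak near bin 40 with varying amplitude.
healthy = np.stack([spectrum(40, a) for a in rng.uniform(0.8, 1.2, 50)])

# Fit a low-dimensional linear model of normal variation (PCA, 2 components).
mean = healthy.mean(axis=0)
_, _, Vt = np.linalg.svd(healthy - mean, full_matrices=False)
components = Vt[:2]

def recon_error(s):
    """Distance between a spectrum and its projection onto the 'normal' subspace."""
    c = (s - mean) @ components.T
    return np.linalg.norm(s - (mean + c @ components))

ok = recon_error(spectrum(40, 1.0))        # looks like the training data
stressed = recon_error(spectrum(70, 1.0))  # shifted peak: a "stress" signature
print(ok < stressed)                       # the anomaly reconstructs worse
```

A VAE plays the same role nonlinearly: spectra that the model reconstructs poorly carry features absent from the healthy training set, which is what makes the approach free of manual peak selection.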

BRAPH 2: a flexible, open-source, reproducible, community-oriented, easy-to-use framework for network analyses in neurosciences on bioRxiv

BRAPH 2 Genesis enables swift creation of custom, reproducible software distributions—tackling the growing complexity of neuroscience by streamlining analysis across diverse data types and workflows. (Image by B. Zufiria-Gerbolés and Y.-W. Chang.)
BRAPH 2: a flexible, open-source, reproducible, community-oriented, easy-to-use framework for network analyses in neurosciences
Yu-Wei Chang, Blanca Zufiria-Gerbolés, Pablo Emiliano Gómez-Ruiz, Anna Canal-Garcia, Hang Zhao, Mite Mijalkov, Joana Braga Pereira, Giovanni Volpe
bioRxiv: 10.1101/2025.04.11.648455

As network analyses in neuroscience continue to grow in both complexity and size, flexible methods are urgently needed to provide unbiased, reproducible insights into brain function. BRAPH 2 is a versatile, open-source framework that meets this challenge by offering streamlined workflows for advanced statistical models and deep learning in a community-oriented environment. Through its Genesis compiler, users can build specialized distributions with custom pipelines, ensuring flexibility and scalability across diverse research domains. These powerful capabilities will ensure reproducibility and accelerate discoveries in neuroscience.