Corneal endothelium assessment in specular microscopy images with Fuchs’ dystrophy via deep regression of signed distance maps on arXiv

Example of final segmentation with the UNet-dm of the specular microscopy image of a severe case of cornea guttata. (Image by the Authors of the manuscript.)
Corneal endothelium assessment in specular microscopy images with Fuchs’ dystrophy via deep regression of signed distance maps
Juan S. Sierra, Jesus Pineda, Daniela Rueda, Alejandro Tello, Angelica M. Prada, Virgilio Galvis, Giovanni Volpe, Maria S. Millan, Lenny A. Romero, Andres G. Marrugo
arXiv: 2210.07102

Specular microscopy assessment of the human corneal endothelium (CE) in Fuchs’ dystrophy is challenging due to the presence of dark image regions called guttae. This paper proposes a UNet-based segmentation approach that requires minimal post-processing and achieves reliable CE morphometric assessment and guttae identification across all degrees of Fuchs’ dystrophy. We cast the segmentation problem as a regression task of the cell and gutta signed distance maps instead of a pixel-level classification task as typically done with UNets. Compared to the conventional UNet classification approach, the distance-map regression approach converges faster in clinically relevant parameters. It also produces morphometric parameters that agree with the manually-segmented ground-truth data, namely the average cell density difference of -41.9 cells/mm² (95% confidence interval (CI) [-306.2, 222.5]) and the average difference of mean cell area of 14.8 µm² (95% CI [-41.9, 71.5]). These results suggest a promising alternative for CE assessment.
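The signed distance maps the network regresses can be built directly from a binary segmentation mask. Below is a minimal sketch, assuming the standard convention (positive inside a cell, negative outside, magnitude equal to the distance to the cell border). It uses a brute-force pure-NumPy computation for illustration; this is not the authors' pipeline, which would more likely rely on an optimized transform such as `scipy.ndimage.distance_transform_edt`.

```python
import numpy as np

def signed_distance_map(mask: np.ndarray) -> np.ndarray:
    """Signed Euclidean distance map of a binary mask:
    positive inside the foreground, negative outside,
    with magnitude equal to the distance to the boundary."""
    ys, xs = np.indices(mask.shape)
    pts = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    fg = pts[mask.ravel()]          # foreground pixel coordinates
    bg = pts[~mask.ravel()]         # background pixel coordinates
    # Distance of every pixel to its nearest foreground / background pixel
    # (brute force, fine for a small toy image).
    d_fg = np.min(np.linalg.norm(pts[:, None, :] - fg[None], axis=2), axis=1)
    d_bg = np.min(np.linalg.norm(pts[:, None, :] - bg[None], axis=2), axis=1)
    sdm = np.where(mask.ravel(), d_bg, -d_fg)
    return sdm.reshape(mask.shape)

# Toy "cell": a filled 3x3 square on a 9x9 canvas.
mask = np.zeros((9, 9), dtype=bool)
mask[3:6, 3:6] = True
sdm = signed_distance_map(mask)
print(sdm[4, 4])  # centre of the square: +2.0 (two pixels from the border)
print(sdm[0, 0])  # far background corner: negative
```

Regressing such a map instead of per-pixel class labels gives the network a smooth target that encodes distance to cell borders, which is then easy to threshold or watershed into a final segmentation.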

Presentation by J. Pineda at ISMC 2022, Poznan, 19 September 2022

Input graph structure including a redundant number of edges. (Image by J. Pineda.)
Revealing the spatiotemporal fingerprint of microscopic motion using geometric deep learning
Jesús Pineda, Benjamin Midtvedt, Harshith Bachimanchi, Sergio Noé, Daniel Midtvedt, Giovanni Volpe, and Carlo Manzo
Submitted to ISMC 2022
Date: 19 September 2022
Time: 13:40 (CEST)

The characterization of dynamical processes in living systems provides important clues for their mechanistic interpretation and link to biological functions. Thanks to recent advances in microscopy techniques, it is now possible to routinely record the motion of cells, organelles, and individual molecules at multiple spatiotemporal scales in physiological conditions. However, the automated analysis of dynamics occurring in crowded and complex environments still lags behind the acquisition of microscopic image sequences. Here, we present a framework based on geometric deep learning that achieves the accurate estimation of dynamical properties in various biologically-relevant scenarios. This deep-learning approach relies on a graph neural network enhanced by attention-based components. By processing object features with geometric priors, the network is capable of performing multiple tasks, from linking coordinates into trajectories to inferring local and global dynamic properties. We demonstrate the flexibility and reliability of this approach by applying it to real and simulated data corresponding to a broad range of biological experiments.
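The redundant input graph shown in the figure can be sketched in a few lines: detections are connected whenever they fall within a spatial radius and a short temporal window, and the attention-based network is then left to prune the spurious links. The thresholds `r_max` and `dt_max` below are illustrative values, not the ones used in the work.

```python
import numpy as np

def build_graph(coords, frames, r_max=3.0, dt_max=2):
    """Connect every pair of detections closer than r_max in space
    and between 1 and dt_max frames apart, producing a deliberately
    redundant edge set for a downstream network to prune."""
    edges = []
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            dt = abs(int(frames[j]) - int(frames[i]))
            if 0 < dt <= dt_max and np.linalg.norm(coords[j] - coords[i]) <= r_max:
                edges.append((i, j))
    return edges

# Two well-separated particles detected over three frames.
coords = np.array([[0.0, 0.0], [10.0, 0.0],   # frame 0
                   [0.5, 0.1], [10.2, 0.3],   # frame 1
                   [1.1, 0.0], [10.1, 0.9]])  # frame 2
frames = np.array([0, 0, 1, 1, 2, 2])
print(build_graph(coords, frames))
# Each particle is linked to its own detections one and two frames away,
# but never to the distant particle or to same-frame detections.
```

The redundancy is intentional: including more candidate edges than true trajectory links turns trajectory linking into an edge-classification problem the network can solve.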

Soft Matter Lab members present at ISMC 2022, Poznan, 19-23 September 2022

The Soft Matter Lab participates in ISMC 2022 in Poznan, Poland, 19-23 September 2022, with the presentations listed below.

Dynamic live/apoptotic cell assay using phase-contrast imaging and deep learning on bioRxiv

Phase-contrast image before virtual staining. (Image by the Authors.)
Dynamic live/apoptotic cell assay using phase-contrast imaging and deep learning
Zofia Korczak, Jesús Pineda, Saga Helgadottir, Benjamin Midtvedt, Mattias Goksör, Giovanni Volpe, Caroline B. Adiels
bioRxiv: https://doi.org/10.1101/2022.07.18.500422

Chemical live/dead assay has a long history of providing information about the viability of cells cultured in vitro. The standard methods rely on imaging chemically-stained cells using fluorescence microscopy and further analysis of the obtained images to retrieve the proportion of living cells in the sample. However, such a technique is not only time-consuming but also invasive. Due to the toxicity of chemical dyes, once a sample is stained, it is discarded, meaning that longitudinal studies are impossible using this approach. Further, information about when cells start programmed cell death (apoptosis) is more relevant for dynamic studies. Here, we present an alternative method where cell images from phase-contrast time-lapse microscopy are virtually-stained using deep learning. In this study, human endothelial cells are stained live or apoptotic and subsequently counted using the self-supervised single-shot deep-learning technique (LodeSTAR). Our approach is less labour-intensive than traditional chemical staining procedures and provides dynamic live/apoptotic cell ratios from a continuous cell population with minimal impact. Further, it can be used to extract data from dense cell samples, where manual counting is unfeasible.

DeepTrack won the pitching competition at the Startup Camp 2022. Congrats!

DeepTrack team members (left to right) Henrik, Giovanni and Jesus. (Picture by Jonas Sandwall, Chalmers Ventures.)
The DeepTrack team, composed of Henrik Klein Moberg, Jesus Pineda, Benjamin Midtvedt and Giovanni Volpe, won the pitching competition at the Startup Camp 2022 organised by Chalmers Ventures.

In the event, held on Tuesday, 15 March 2022, 16:00-19:00, the ten teams that had gone through the Startup Camp training and developed their company ideas pitched their companies on stage to a panel of entrepreneurship experts, the other nine teams, and all business coaches at Chalmers Ventures. DeepTrack took first place among the ten participants. Congrats!

Here are a few pictures from the final pitching event of the Startup Camp.

Henrik. (Picture by Jonas Sandwall, Chalmers Ventures.)
DeepTrack team members (left to right) Henrik, Giovanni and Jesus. (Picture by Jonas Sandwall, Chalmers Ventures.)
Panelists. (Picture by Jonas Sandwall, Chalmers Ventures.)

Featured in:
University of Gothenburg – News and Events: AI tool that analyses microscope images won startup competition (English) and AI-verktyg som analyserar mikroskopbilder vann startup-tävling (Swedish)

Single-shot self-supervised particle tracking on arXiv

LodeSTAR tracks the plankton Noctiluca scintillans. (Image by the Authors of the manuscript.)
Single-shot self-supervised particle tracking
Benjamin Midtvedt, Jesús Pineda, Fredrik Skärberg, Erik Olsén, Harshith Bachimanchi, Emelie Wesén, Elin K. Esbjörner, Erik Selander, Fredrik Höök, Daniel Midtvedt, Giovanni Volpe
arXiv: 2202.13546

Particle tracking is a fundamental task in digital microscopy. Recently, machine-learning approaches have made great strides in overcoming the limitations of more classical approaches. The training of state-of-the-art machine-learning methods almost universally relies on either vast amounts of labeled experimental data or the ability to numerically simulate realistic datasets. However, the data produced by experiments are often challenging to label and cannot be easily reproduced numerically. Here, we propose a novel deep-learning method, named LodeSTAR (Low-shot deep Symmetric Tracking And Regression), that learns to track objects with sub-pixel accuracy from a single unlabeled experimental image. This is made possible by exploiting the inherent roto-translational symmetries of the data. We demonstrate that LodeSTAR outperforms traditional methods in terms of accuracy. Furthermore, we analyze challenging experimental data containing densely packed cells or noisy backgrounds. We also exploit additional symmetries to extend the measurable particle properties to the particle’s vertical position by propagating the signal in Fourier space and its polarizability by scaling the signal strength. Thanks to the ability to train deep-learning models with a single unlabeled image, LodeSTAR can accelerate the development of high-quality microscopic analysis pipelines for engineering, biology, and medicine.
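The translational part of the symmetry LodeSTAR exploits can be illustrated with a toy equivariant "tracker": an intensity-weighted centroid standing in for the actual neural network. Shifting the input image shifts the prediction by exactly the same amount; this is the self-consistency that, in the real method, is turned into a training signal for a network using only a single unlabeled image. Everything below (the centroid model, the synthetic spot) is a didactic stand-in, not the authors' implementation.

```python
import numpy as np

def centroid_tracker(img: np.ndarray) -> np.ndarray:
    """Toy translation-equivariant 'model': intensity-weighted centroid."""
    ys, xs = np.indices(img.shape)
    w = img / img.sum()
    return np.array([(ys * w).sum(), (xs * w).sum()])

def spot(cy: float, cx: float, size: int = 32, sigma: float = 2.0) -> np.ndarray:
    """Synthetic blurry particle: a Gaussian spot on a size x size canvas."""
    ys, xs = np.indices((size, size))
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

img = spot(12.0, 15.0)
shifted = spot(12.0 + 3.0, 15.0 - 2.0)  # same particle, translated by (+3, -2)
pred, pred_shift = centroid_tracker(img), centroid_tracker(shifted)
# Equivariance check: the prediction moves exactly with the input.
print(np.round(pred_shift - pred, 3))
```

During training, LodeSTAR enforces this kind of consistency between a network's predictions on randomly transformed copies of the same image, which is why no labels are needed.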

Geometric deep learning reveals the spatiotemporal fingerprint of microscopic motion on arXiv

Input graph structure including a redundant number of edges. (Image by J. Pineda.)
Geometric deep learning reveals the spatiotemporal fingerprint of microscopic motion
Jesús Pineda, Benjamin Midtvedt, Harshith Bachimanchi, Sergio Noé, Daniel Midtvedt, Giovanni Volpe, Carlo Manzo
arXiv: 2202.06355

The characterization of dynamical processes in living systems provides important clues for their mechanistic interpretation and link to biological functions. Thanks to recent advances in microscopy techniques, it is now possible to routinely record the motion of cells, organelles, and individual molecules at multiple spatiotemporal scales in physiological conditions. However, the automated analysis of dynamics occurring in crowded and complex environments still lags behind the acquisition of microscopic image sequences. Here, we present a framework based on geometric deep learning that achieves the accurate estimation of dynamical properties in various biologically-relevant scenarios. This deep-learning approach relies on a graph neural network enhanced by attention-based components. By processing object features with geometric priors, the network is capable of performing multiple tasks, from linking coordinates into trajectories to inferring local and global dynamic properties. We demonstrate the flexibility and reliability of this approach by applying it to real and simulated data corresponding to a broad range of biological experiments.

Press release on Active Droploids

The article Active Droploids has been featured in a press release of the University of Gothenburg.

The study, published in Nature Communications, examines a special system of colloidal particles and demonstrates a new kind of active matter, which interacts with and modifies its environment. In the long run, the results of the study could be used for drug delivery inside the human body, or for sensing environmental pollutants and cleaning them up.

Here are the links to the press releases:
English: Feedback creates a new class of active biomimetic materials.
Swedish: Feedback möjliggör en ny form av aktiva biomimetiska material.

The article has also been featured in Mirage News, Science Daily, Phys.org, Innovations Report, Informationsdienst Wissenschaft (idw) online, and Nanowerk.

Active droploids published in Nature Communications

Active droploids. (Image taken from the article.)
Active droploids
Jens Grauer, Falko Schmidt, Jesús Pineda, Benjamin Midtvedt, Hartmut Löwen, Giovanni Volpe & Benno Liebchen
Nat. Commun. 12, 6005 (2021)
doi: 10.1038/s41467-021-26319-3
arXiv: 2109.10677

Active matter comprises self-driven units, such as bacteria and synthetic microswimmers, that can spontaneously form complex patterns and assemble into functional microdevices. These processes are possible thanks to the out-of-equilibrium nature of active-matter systems, fueled by a one-way free-energy flow from the environment into the system. Here, we take the next step in the evolution of active matter by realizing a two-way coupling between active particles and their environment, where active particles act back on the environment giving rise to the formation of superstructures. In experiments and simulations we observe that, under light-illumination, colloidal particles and their near-critical environment create mutually-coupled co-evolving structures. These structures unify in the form of active superstructures featuring a droplet shape and a colloidal engine inducing self-propulsion. We call them active droploids—a portmanteau of droplet and colloids. Our results provide a pathway to create active superstructures through environmental feedback.

Press release on Extracting quantitative biological information from bright-field cell images using deep learning

Virtually-stained generated image of lipid droplets.

The article Extracting quantitative biological information from bright-field cell images using deep learning has been featured in a press release of the University of Gothenburg.

The study, recently published in Biophysics Reviews, shows how artificial intelligence can be used to obtain faster, cheaper and more reliable information about cells, while also eliminating the disadvantages of using chemicals in the process.

Here are the links to the press releases on Cision:
Swedish: Effektivare studier av celler med ny AI-metod
English: More effective cell studies using new AI method

Here are the links to the press releases in the News of the University of Gothenburg:
Swedish: Effektivare studier av celler med ny AI-metod
English: More effective cell studies using new AI method