Extracting quantitative biological information from brightfield cell images using deep learning on ArXiv

Virtually-stained generated image of lipid droplets.
Extracting quantitative biological information from brightfield cell images using deep learning
Saga Helgadottir, Benjamin Midtvedt, Jesús Pineda, Alan Sabirsh, Caroline B. Adiels, Stefano Romeo, Daniel Midtvedt, Giovanni Volpe
arXiv: 2012.12986

Quantitative analysis of cell structures is essential for biomedical and pharmaceutical research. The standard imaging approach relies on fluorescence microscopy, where cell structures of interest are labeled by chemical staining techniques. However, these techniques are often invasive and sometimes even toxic to the cells, in addition to being time-consuming, labor-intensive, and expensive. Here, we introduce an alternative deep-learning-powered approach based on the analysis of brightfield images by a conditional generative adversarial network (cGAN). We show that this approach can extract information from the brightfield images to generate virtually-stained images, which can be used in subsequent downstream quantitative analyses of cell structures. Specifically, we train a cGAN to virtually stain lipid droplets, cytoplasm, and nuclei using brightfield images of human stem-cell-derived fat cells (adipocytes), which are of particular interest for nanomedicine and vaccine development. Subsequently, we use these virtually-stained images to extract quantitative measures about these cell structures. Generating virtually-stained fluorescence images is less invasive, less expensive, and more reproducible than standard chemical staining; furthermore, it frees up the fluorescence microscopy channels for other analytical probes, thus increasing the amount of information that can be extracted from each cell.
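
For readers curious what such a virtual-staining pipeline looks like in code, here is a minimal pix2pix-style sketch in TensorFlow/Keras. It is not the authors' implementation: the network sizes, the 256x256 single-channel input, the L1 weight of 100, and all other hyperparameters are illustrative assumptions; only the overall idea (a generator mapping brightfield images to stained channels, judged by a discriminator conditioned on the brightfield input) follows the paper.

```python
# Minimal pix2pix-style virtual-staining sketch (not the authors' implementation).
# Network sizes, input shape, and loss weights are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def make_generator(channels_out=3):
    """Small encoder-decoder mapping a brightfield image to stained channels."""
    inp = layers.Input(shape=(256, 256, 1))  # brightfield input
    x = layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 4, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(channels_out, 4, strides=2, padding="same",
                                 activation="tanh")(x)  # e.g. lipids, cytoplasm, nuclei
    return tf.keras.Model(inp, out)

def make_discriminator(channels_out=3):
    """PatchGAN-style critic conditioned on the brightfield input."""
    bf = layers.Input(shape=(256, 256, 1))
    stain = layers.Input(shape=(256, 256, channels_out))
    x = layers.Concatenate()([bf, stain])
    x = layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2D(128, 4, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 4, padding="same")(x)  # per-patch real/fake logits
    return tf.keras.Model([bf, stain], out)

generator = make_generator()
discriminator = make_discriminator()
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(2e-4)
d_opt = tf.keras.optimizers.Adam(2e-4)

@tf.function
def train_step(brightfield, fluorescence):
    """One adversarial update on a batch of paired images scaled to [-1, 1]."""
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake = generator(brightfield, training=True)
        real_logits = discriminator([brightfield, fluorescence], training=True)
        fake_logits = discriminator([brightfield, fake], training=True)
        # Generator: fool the discriminator while staying close to the chemical stain (L1).
        g_loss = (bce(tf.ones_like(fake_logits), fake_logits)
                  + 100.0 * tf.reduce_mean(tf.abs(fluorescence - fake)))
        d_loss = (bce(tf.ones_like(real_logits), real_logits)
                  + bce(tf.zeros_like(fake_logits), fake_logits))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    return g_loss, d_loss
```

In practice, train_step would be called over batches of registered brightfield/fluorescence image pairs, and the resulting virtual stains fed into the same downstream segmentation and quantification pipelines used for chemically stained images.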

Quantitative Digital Microscopy with Deep Learning on ArXiv

Particle tracking and characterization in terms of radius and refractive index.

Quantitative Digital Microscopy with Deep Learning
Benjamin Midtvedt, Saga Helgadottir, Aykut Argun, Jesús Pineda, Daniel Midtvedt, Giovanni Volpe
arXiv: 2010.08260

Video microscopy has a long history of providing insights and breakthroughs for a broad range of disciplines, from physics to biology. Image analysis to extract quantitative information from video microscopy data has traditionally relied on algorithmic approaches, which are often difficult to implement, time-consuming, and computationally expensive. Recently, alternative data-driven approaches using deep learning have greatly improved quantitative digital microscopy, potentially offering automated, accurate, and fast image analysis. However, the combination of deep learning and video microscopy remains underutilized, primarily due to the steep learning curve involved in developing custom deep-learning solutions. To overcome this issue, we introduce a software package, DeepTrack 2.0, to design, train, and validate deep-learning solutions for digital microscopy. We use it to exemplify how deep learning can be employed for a broad range of applications, from particle localization, tracking, and characterization to cell counting and classification. Thanks to its user-friendly graphical interface, DeepTrack 2.0 can be easily customized for user-specific applications, and, thanks to its open-source object-oriented programming, it can be easily expanded to add features and functionalities, potentially introducing deep-learning-enhanced video microscopy to a far wider audience.
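
To give a flavour of the workflow DeepTrack 2.0 targets, here is a hedged sketch along the lines of the package's documented quick-start examples: simulate fluorescent point particles with a virtual microscope and train a small regressor to localize them. The feature names (PointParticle, Fluorescence, get_property) follow those examples, but exact arguments may differ between DeepTrack versions, and the Keras CNN below is a generic stand-in rather than one of the models shipped with the package.

```python
# Hedged sketch of a DeepTrack 2.0-style workflow: simulated training data plus a small
# localization network. Feature/argument names follow the documented quick-start
# examples and may differ between versions; treat this as illustrative, not canonical.
import numpy as np
import deeptrack as dt
import tensorflow as tf

IMAGE_SIZE = 64

# A fluorescent point particle at a random position, imaged by a virtual microscope.
particle = dt.PointParticle(
    intensity=100,
    position=lambda: np.random.uniform(10, IMAGE_SIZE - 10, 2),
)
optics = dt.Fluorescence(
    NA=0.8,
    wavelength=680e-9,
    magnification=10,
    resolution=1e-6,
    output_region=(0, 0, IMAGE_SIZE, IMAGE_SIZE),
)
imaged_particle = optics(particle)

def sample(n):
    """Resolve n simulated images together with the positions that produced them."""
    images, positions = [], []
    for _ in range(n):
        image = imaged_particle.update().resolve()
        images.append(np.array(image))
        positions.append(image.get_property("position"))
    return np.stack(images), np.stack(positions)

x_train, y_train = sample(512)

# A generic CNN regressing the particle position (in pixels) from the image.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu",
                           input_shape=(IMAGE_SIZE, IMAGE_SIZE, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2),
])
model.compile(optimizer="adam", loss="mae")
model.fit(x_train, y_train, epochs=10, batch_size=32)
```

Because the training data are simulated, the ground-truth positions are known exactly, which is what makes this kind of pipeline trainable without manual annotation.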

Jesús Pineda joins the Soft Matter Lab

Jesús Pineda starts his PhD at the Physics Department of the University of Gothenburg on 16th September 2020.

Jesús holds a Master's degree in Electrical and Electronic Engineering from the Universidad Tecnológica de Bolívar, Cartagena, Colombia.

In his PhD, he will focus on Microscopy and Deep Learning.

Soft Matter Lab presentations at the SPIE Optics+Photonics Digital Forum

Seven members of the Soft Matter Lab (Saga Helgadottir, Benjamin Midtvedt, Aykut Argun, Laura Pérez-García, Daniel Midtvedt, Harshith Bachimanchi, and Emiliano Gómez) were selected for oral and poster presentations at the SPIE Optics+Photonics Digital Forum, August 24-28, 2020.

The SPIE digital forum is a free, online only event.
The registration for the Digital Forum includes access to all presentations and proceedings.

The Soft Matter Lab contributions are part of the SPIE Nanoscience + Engineering conferences, namely the conference on Emerging Topics in Artificial Intelligence 2020 and the conference on Optical Trapping and Optical Micromanipulation XVII.

The contributions are listed below, including the presentations co-authored by Giovanni Volpe.

Note: the presentation times are indicated according to PDT (Pacific Daylight Time) (GMT-7)

Emerging Topics in Artificial Intelligence 2020

Saga Helgadottir
Digital video microscopy with deep learning (Invited Paper)
26 August 2020, 10:30 AM
SPIE Link: here.

Aykut Argun
Calibration of force fields using recurrent neural networks
26 August 2020, 8:30 AM
SPIE Link: here.

Laura Pérez-García
Deep-learning enhanced light-sheet microscopy
25 August 2020, 9:10 AM
SPIE Link: here.

Daniel Midtvedt
Holographic characterization of subwavelength particles enhanced by deep learning
24 August 2020, 2:40 PM
SPIE Link: here.

Benjamin Midtvedt
DeepTrack: A comprehensive deep learning framework for digital microscopy
26 August 2020, 11:40 AM
SPIE Link: here.

Gorka Muñoz-Gil
The anomalous diffusion challenge: Single trajectory characterisation as a competition
26 August 2020, 12:00 PM
SPIE Link: here.

Meera Srikrishna
Brain tissue segmentation using U-Nets in cranial CT scans
25 August 2020, 2:00 PM
SPIE Link: here.

Juan S. Sierra
Automated corneal endothelium image segmentation in the presence of cornea guttata via convolutional neural networks
26 August 2020, 11:50 AM
SPIE Link: here.

Harshith Bachimanchi
Digital holographic microscopy driven by deep learning: A study on marine planktons (Poster)
24 August 2020, 5:30 PM
SPIE Link: here.

Emiliano Gómez
BRAPH 2.0: Software for the analysis of brain connectivity with graph theory (Poster)
24 August 2020, 5:30 PM
SPIE Link: here.

Optical Trapping and Optical Micromanipulation XVII

Laura Pérez-García
Reconstructing complex force fields with optical tweezers
24 August 2020, 5:00 PM
SPIE Link: here.

Alejandro V. Arzola
Direct visualization of the spin-orbit angular momentum conversion in optical trapping
25 August 2020, 10:40 AM
SPIE Link: here.

Isaac Lenton
Illuminating the complex behaviour of particles in optical traps with machine learning
26 August 2020, 9:10 AM
SPIE Link: here.

Fatemeh Kalantarifard
Optical trapping of microparticles and yeast cells at ultra-low intensity by intracavity nonlinear feedback forces
24 August 2020, 11:10 AM
SPIE Link: here.


Seminar on Robust automated reading of the skin prick test via 3D imaging and parametric surface fitting by Jesús Pineda from Universidad Tecnológica de Bolívar, Nexus, 3 March 2020

Robust automated reading of the skin prick test via 3D imaging and parametric surface fitting.
Seminar by Jesús Pineda from the Universidad Tecnológica de Bolívar, Cartagena, Colombia.

The conventional reading of the skin prick test (SPT) for diagnosing allergies is prone to inter- and intra-observer variations. Drawing the contours of the skin wheals from the SPT and scanning them for computer processing is cumbersome. However, 3D scanning technology promises the best results in terms of accuracy, fast acquisition, and processing. In this work, we present a wide-field 3D imaging system for the 3D reconstruction of the SPT, and we propose an automated method for the measurement of the skin wheals. The automated measurement is based on pyramidal decomposition and parametric 3D surface fitting for estimating the sizes of the wheals directly. We propose two parametric models for diameter estimation: Model 1 is based on an inverted elliptical paraboloid function, and Model 2 on a super-Gaussian function. The accuracy of the 3D imaging system was evaluated with validation objects, obtaining transversal and depth accuracies within ±0.1 mm and ±0.01 mm, respectively. We tested the method on 80 SPTs conducted on volunteer subjects, which resulted in 61 detected wheals. We analyzed the accuracy of the models against manual reference measurements from a physician and found that Model 2 on average yields diameters closer to the reference measurements (Model 1: -0.398 mm vs. Model 2: -0.339 mm), with narrower 95% limits of agreement (Model 1: [-1.58, 0.78] mm vs. Model 2: [-1.39, 0.71] mm) in a Bland-Altman analysis. In one subject, we tested the reproducibility of the method by registering the forearm under five different poses, obtaining a maximum coefficient of variation of 5.24% in the estimated wheal diameters. The proposed method delivers accurate and reproducible measurements of the SPT [1].
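
To make the surface-fitting step concrete, below is a minimal SciPy sketch of fitting a 2D super-Gaussian (in the spirit of Model 2) to a wheal height map and reading off an effective diameter. The parameterization, the synthetic data, and the half-maximum diameter definition are illustrative assumptions and may differ from those used in the paper.

```python
# Minimal sketch: fit a 2D super-Gaussian (in the spirit of "Model 2") to a wheal
# height map and estimate a diameter. Parameterization and diameter definition are
# illustrative assumptions, not necessarily those used in the paper.
import numpy as np
from scipy.optimize import curve_fit

def super_gaussian(coords, amplitude, x0, y0, sigma_x, sigma_y, power, offset):
    """Flat-topped bump; power > 1 makes the profile boxier than a Gaussian."""
    x, y = coords
    r2 = ((x - x0) / sigma_x) ** 2 + ((y - y0) / sigma_y) ** 2
    return amplitude * np.exp(-(r2 ** power)) + offset

# Synthetic height-map patch standing in for a reconstructed wheal (heights in mm).
x, y = np.meshgrid(np.linspace(-5, 5, 100), np.linspace(-5, 5, 100))
true_params = (0.8, 0.3, -0.2, 2.0, 1.6, 1.5, 0.0)
z = super_gaussian((x, y), *true_params) + np.random.normal(0, 0.02, x.shape)

# Fit the parametric surface to the measured heights.
p0 = (z.max(), 0.0, 0.0, 1.5, 1.5, 1.0, 0.0)  # initial guess
popt, _ = curve_fit(super_gaussian, (x.ravel(), y.ravel()), z.ravel(), p0=p0)
amplitude, x0, y0, sigma_x, sigma_y, power, offset = popt

def fwhm(sigma):
    """Full width at half maximum of the fitted profile along one axis."""
    return 2 * sigma * np.log(2) ** (1 / (2 * power))

# One possible diameter estimate: mean full width at half maximum of the fitted bump.
print("estimated wheal diameter ~", (fwhm(sigma_x) + fwhm(sigma_y)) / 2, "mm")
```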

References:

  1. Jesús Pineda, Raul Vargas, Lenny A. Romero, Javier Marrugo, Jaime Meneses & Andres G. Marrugo (2019). Robust automated reading of the skin prick test via 3D imaging and parametric surface fitting. PLOS ONE 14(10): e0223623.

Place: Nexus room, Fysik Origo, Fysik
Time: 03 March, 2020, 11:00